From patchwork Fri Jun 2 14:39:59 2023
X-Patchwork-Submitter: Mathias Nyman
X-Patchwork-Id: 688991
From: Mathias Nyman
Cc: Udipto Goswami, Mathias Nyman
Subject: [PATCH 01/11] usb: xhci: Remove unused udev from xhci_log_ctx trace event
Date: Fri, 2 Jun 2023 17:39:59 +0300
Message-Id: <20230602144009.1225632-2-mathias.nyman@linux.intel.com>
In-Reply-To: <20230602144009.1225632-1-mathias.nyman@linux.intel.com>
References: <20230602144009.1225632-1-mathias.nyman@linux.intel.com>
X-Mailing-List: linux-usb@vger.kernel.org

From: Udipto Goswami

The xhci_log_ctx trace event does not use the extracted udev to print out
anything, so remove it.
Fixes: 1d27fabec068 ("xhci: add xhci_address_ctx trace event")
Signed-off-by: Udipto Goswami
Signed-off-by: Mathias Nyman
---
 drivers/usb/host/xhci-trace.h | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/drivers/usb/host/xhci-trace.h b/drivers/usb/host/xhci-trace.h
index 4286dba5b157..7555c4ea7c4b 100644
--- a/drivers/usb/host/xhci-trace.h
+++ b/drivers/usb/host/xhci-trace.h
@@ -80,20 +80,16 @@ DECLARE_EVENT_CLASS(xhci_log_ctx,
 		__field(dma_addr_t, ctx_dma)
 		__field(u8 *, ctx_va)
 		__field(unsigned, ctx_ep_num)
-		__field(int, slot_id)
 		__dynamic_array(u32, ctx_data,
 			((HCC_64BYTE_CONTEXT(xhci->hcc_params) + 1) * 8) *
 			((ctx->type == XHCI_CTX_TYPE_INPUT) + ep_num + 1))
 	),
 	TP_fast_assign(
-		struct usb_device *udev;
-
-		udev = to_usb_device(xhci_to_hcd(xhci)->self.controller);
 		__entry->ctx_64 = HCC_64BYTE_CONTEXT(xhci->hcc_params);
 		__entry->ctx_type = ctx->type;
 		__entry->ctx_dma = ctx->dma;
 		__entry->ctx_va = ctx->bytes;
-		__entry->slot_id = udev->slot_id;
 		__entry->ctx_ep_num = ep_num;
 		memcpy(__get_dynamic_array(ctx_data), ctx->bytes,
 			((HCC_64BYTE_CONTEXT(xhci->hcc_params) + 1) * 32) *
From patchwork Fri Jun 2 14:40:00 2023
X-Patchwork-Submitter: Mathias Nyman
X-Patchwork-Id: 688684
From: Mathias Nyman
Subject: [PATCH 02/11] xhci: Add usb cold attach (CAS) as a reason to resume root hub
Date: Fri, 2 Jun 2023 17:40:00 +0300
Message-Id: <20230602144009.1225632-3-mathias.nyman@linux.intel.com>
In-Reply-To: <20230602144009.1225632-1-mathias.nyman@linux.intel.com>
References: <20230602144009.1225632-1-mathias.nyman@linux.intel.com>

Check for the cold attach (CAS) bit while checking for other usb3 roothub
port changes during host resume.

The CAS bit is set if a USB 3 device is connected while the host is
suspended in such a way that the host can't perform proper link training
and progress the link to the enabled U0 state.

If the CAS bit is set we want to resume the root hub, and reset and
enumerate the newly connected device.

Signed-off-by: Mathias Nyman
---
 drivers/usb/host/xhci.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
index b81313ffeb76..3a13e2453203 100644
--- a/drivers/usb/host/xhci.c
+++ b/drivers/usb/host/xhci.c
@@ -833,7 +833,7 @@ static bool xhci_pending_portevent(struct xhci_hcd *xhci)
 	ports = xhci->usb3_rhub.ports;
 	while (port_index--) {
 		portsc = readl(ports[port_index]->addr);
-		if (portsc & PORT_CHANGE_MASK ||
+		if (portsc & (PORT_CHANGE_MASK | PORT_CAS) ||
 		    (portsc & PORT_PLS_MASK) == XDEV_RESUME)
 			return true;
 	}
From patchwork Fri Jun 2 14:40:01 2023
X-Patchwork-Submitter: Mathias Nyman
X-Patchwork-Id: 688990
From: Mathias Nyman
Subject: [PATCH 03/11] xhci: Don't require a valid get_quirks() function pointer during xhci setup
Date: Fri, 2 Jun 2023 17:40:01 +0300
Message-Id: <20230602144009.1225632-4-mathias.nyman@linux.intel.com>
In-Reply-To: <20230602144009.1225632-1-mathias.nyman@linux.intel.com>
References: <20230602144009.1225632-1-mathias.nyman@linux.intel.com>

Not all platform drivers need to set up custom quirks during the generic
xhci setup. Allow them to pass NULL as the function pointer when calling
xhci_gen_setup().

Signed-off-by: Mathias Nyman
---
 drivers/usb/host/xhci.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
index 3a13e2453203..176969bf2d5c 100644
--- a/drivers/usb/host/xhci.c
+++ b/drivers/usb/host/xhci.c
@@ -5181,7 +5181,8 @@ int xhci_gen_setup(struct usb_hcd *hcd, xhci_get_quirks_t get_quirks)
 
 	xhci->quirks |= quirks;
 
-	get_quirks(dev, xhci);
+	if (get_quirks)
+		get_quirks(dev, xhci);
 
 	/* In xhci controllers which follow xhci 1.0 spec gives a spurious
 	 * success event after a short transfer. This quirk will ignore such
From patchwork Fri Jun 2 14:40:02 2023
X-Patchwork-Submitter: Mathias Nyman
X-Patchwork-Id: 688683
From: Mathias Nyman
Subject: [PATCH 04/11] xhci: get rid of XHCI_PLAT quirk that used to prevent MSI setup
Date: Fri, 2 Jun 2023 17:40:02 +0300
Message-Id: <20230602144009.1225632-5-mathias.nyman@linux.intel.com>
In-Reply-To: <20230602144009.1225632-1-mathias.nyman@linux.intel.com>
References: <20230602144009.1225632-1-mathias.nyman@linux.intel.com>

The XHCI_PLAT quirk was only needed to ensure non-PCI xHC hosts avoided
setting up MSI interrupts in the generic xhci codepaths. The MSI setup code
is now moved to the PCI-specific xhci-pci.c file, so the quirk is no longer
needed.

Remove setting the XHCI_PLAT quirk for the HiSilicon SoC xHC, NVIDIA Tegra
xHC, MediaTek xHC and the generic xhci-plat driver, and remove the checks
for XHCI_PLAT in the xhci-pci.c MSI setup code.

Signed-off-by: Mathias Nyman
---
 drivers/usb/host/xhci-histb.c | 12 +-----------
 drivers/usb/host/xhci-mtk.c   |  6 ------
 drivers/usb/host/xhci-pci.c   |  7 -------
 drivers/usb/host/xhci-plat.c  |  7 +------
 drivers/usb/host/xhci-tegra.c |  1 -
 drivers/usb/host/xhci.h       |  2 +-
 6 files changed, 3 insertions(+), 32 deletions(-)

diff --git a/drivers/usb/host/xhci-histb.c b/drivers/usb/host/xhci-histb.c
index d8aba07e802d..f9a4a4b0eb57 100644
--- a/drivers/usb/host/xhci-histb.c
+++ b/drivers/usb/host/xhci-histb.c
@@ -164,16 +164,6 @@ static void xhci_histb_host_disable(struct xhci_hcd_histb *histb)
 	clk_disable_unprepare(histb->bus_clk);
 }
 
-static void xhci_histb_quirks(struct device *dev, struct xhci_hcd *xhci)
-{
-	/*
-	 * As of now platform drivers don't provide MSI support so we ensure
-	 * here that the generic code does not try to make a pci_dev from our
-	 * dev struct in order to setup MSI
-	 */
-	xhci->quirks |= XHCI_PLAT;
-}
-
 /* called during probe() after chip reset completes */
 static int xhci_histb_setup(struct usb_hcd *hcd)
 {
@@ -186,7 +176,7 @@ static int xhci_histb_setup(struct usb_hcd *hcd)
 		return ret;
 	}
 
-	return xhci_gen_setup(hcd, xhci_histb_quirks);
+	return xhci_gen_setup(hcd, NULL);
 }
 
 static const struct xhci_driver_overrides xhci_histb_overrides __initconst = {
diff --git a/drivers/usb/host/xhci-mtk.c b/drivers/usb/host/xhci-mtk.c
index 8d9a55b0281b..51d9d4d4f6a5 100644
--- a/drivers/usb/host/xhci-mtk.c
+++ b/drivers/usb/host/xhci-mtk.c
@@ -418,12 +418,6 @@ static void xhci_mtk_quirks(struct device *dev, struct xhci_hcd *xhci)
 	struct usb_hcd *hcd = xhci_to_hcd(xhci);
 	struct xhci_hcd_mtk *mtk = hcd_to_mtk(hcd);
 
-	/*
-	 * As of now platform drivers don't provide MSI support so we ensure
-	 * here that the generic code does not try to make a pci_dev from our
-	 * dev struct in order to setup MSI
-	 */
-	xhci->quirks |= XHCI_PLAT;
 	xhci->quirks |= XHCI_MTK_HOST;
 	/*
 	 * MTK host controller gives a spurious successful event after a
diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
index 69a5cb7eba38..611703f863e0 100644
--- a/drivers/usb/host/xhci-pci.c
+++ b/drivers/usb/host/xhci-pci.c
@@ -108,9 +108,6 @@ static void xhci_cleanup_msix(struct xhci_hcd *xhci)
 	struct usb_hcd *hcd = xhci_to_hcd(xhci);
 	struct pci_dev *pdev = to_pci_dev(hcd->self.controller);
 
-	if (xhci->quirks & XHCI_PLAT)
-		return;
-
 	/* return if using legacy interrupt */
 	if (hcd->irq > 0)
 		return;
@@ -208,10 +205,6 @@ static int xhci_try_enable_msi(struct usb_hcd *hcd)
 	struct pci_dev  *pdev;
 	int ret;
 
-	/* The xhci platform device has set up IRQs through usb_add_hcd. */
-	if (xhci->quirks & XHCI_PLAT)
-		return 0;
-
 	pdev = to_pci_dev(xhci_to_hcd(xhci)->self.controller);
 	/*
 	 * Some Fresco Logic host controllers advertise MSI, but fail to
diff --git a/drivers/usb/host/xhci-plat.c b/drivers/usb/host/xhci-plat.c
index a52d73c2cd80..1d902d1513bc 100644
--- a/drivers/usb/host/xhci-plat.c
+++ b/drivers/usb/host/xhci-plat.c
@@ -78,12 +78,7 @@ static void xhci_plat_quirks(struct device *dev, struct xhci_hcd *xhci)
 {
 	struct xhci_plat_priv *priv = xhci_to_priv(xhci);
 
-	/*
-	 * As of now platform drivers don't provide MSI support so we ensure
-	 * here that the generic code does not try to make a pci_dev from our
-	 * dev struct in order to setup MSI
-	 */
-	xhci->quirks |= XHCI_PLAT | priv->quirks;
+	xhci->quirks |= priv->quirks;
 }
 
 /* called during probe() after chip reset completes */
diff --git a/drivers/usb/host/xhci-tegra.c b/drivers/usb/host/xhci-tegra.c
index f124483044a2..6ca8a37e53e1 100644
--- a/drivers/usb/host/xhci-tegra.c
+++ b/drivers/usb/host/xhci-tegra.c
@@ -2663,7 +2663,6 @@ static void tegra_xhci_quirks(struct device *dev, struct xhci_hcd *xhci)
 {
 	struct tegra_xusb *tegra = dev_get_drvdata(dev);
 
-	xhci->quirks |= XHCI_PLAT;
 	if (tegra && tegra->soc->lpm_support)
 		xhci->quirks |= XHCI_LPM_SUPPORT;
 }
diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
index f845c15073ba..56e318c384ff 100644
--- a/drivers/usb/host/xhci.h
+++ b/drivers/usb/host/xhci.h
@@ -1874,7 +1874,7 @@ struct xhci_hcd {
 #define XHCI_SPURIOUS_REBOOT	BIT_ULL(13)
 #define XHCI_COMP_MODE_QUIRK	BIT_ULL(14)
 #define XHCI_AVOID_BEI		BIT_ULL(15)
-#define XHCI_PLAT		BIT_ULL(16)
+#define XHCI_PLAT		BIT_ULL(16) /* Deprecated */
 #define XHCI_SLOW_SUSPEND	BIT_ULL(17)
 #define XHCI_SPURIOUS_WAKEUP	BIT_ULL(18)
 /* For controllers with a broken beyond repair streams implementation */
From patchwork Fri Jun 2 14:40:03 2023
X-Patchwork-Submitter: Mathias Nyman
X-Patchwork-Id: 688989
From: Mathias Nyman
Subject: [PATCH 05/11] xhci: split allocate interrupter into separate allocate and add parts
Date: Fri, 2 Jun 2023 17:40:03 +0300
Message-Id: <20230602144009.1225632-6-mathias.nyman@linux.intel.com>
In-Reply-To: <20230602144009.1225632-1-mathias.nyman@linux.intel.com>
References: <20230602144009.1225632-1-mathias.nyman@linux.intel.com>

The current function that both allocates and adds the interrupter isn't
optimal when using several interrupters. The array of interrupters needs to
be protected with a lock while adding or removing interrupters. If memory
is allocated under the default xhci spinlock then GFP_KERNEL can't be used.

There is no need to allocate the interrupter memory under the lock, so
split this code into a separate unlocked allocate part and a lock-protected
add part.

Signed-off-by: Mathias Nyman
---
 drivers/usb/host/xhci-mem.c | 78 +++++++++++++++++++------------------
 1 file changed, 41 insertions(+), 37 deletions(-)

diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
index 7e106bd804ca..2bf8121f4d36 100644
--- a/drivers/usb/host/xhci-mem.c
+++ b/drivers/usb/host/xhci-mem.c
@@ -1831,13 +1831,15 @@ xhci_free_interrupter(struct xhci_hcd *xhci, struct xhci_interrupter *ir)
 	 * low or high 32 bits of ERSTBA immediately causes the controller to
 	 * dereference the partially cleared 64 bit address, causing IOMMU error.
 	 */
-	tmp = readl(&ir->ir_set->erst_size);
-	tmp &= ERST_SIZE_MASK;
-	writel(tmp, &ir->ir_set->erst_size);
-
-	tmp64 = xhci_read_64(xhci, &ir->ir_set->erst_dequeue);
-	tmp64 &= (u64) ERST_PTR_MASK;
-	xhci_write_64(xhci, tmp64, &ir->ir_set->erst_dequeue);
+	if (ir->ir_set) {
+		tmp = readl(&ir->ir_set->erst_size);
+		tmp &= ERST_SIZE_MASK;
+		writel(tmp, &ir->ir_set->erst_size);
+
+		tmp64 = xhci_read_64(xhci, &ir->ir_set->erst_dequeue);
+		tmp64 &= (u64) ERST_PTR_MASK;
+		xhci_write_64(xhci, tmp64, &ir->ir_set->erst_dequeue);
+	}
 
 	/* free interrrupter event ring */
 	if (ir->event_ring)
@@ -2227,43 +2229,50 @@ static int xhci_setup_port_arrays(struct xhci_hcd *xhci, gfp_t flags)
 }
 
 static struct xhci_interrupter *
-xhci_alloc_interrupter(struct xhci_hcd *xhci, unsigned int intr_num, gfp_t flags)
+xhci_alloc_interrupter(struct xhci_hcd *xhci, gfp_t flags)
 {
 	struct device *dev = xhci_to_hcd(xhci)->self.sysdev;
 	struct xhci_interrupter *ir;
-	u64 erst_base;
-	u32 erst_size;
 	int ret;
 
-	if (intr_num > xhci->max_interrupters) {
-		xhci_warn(xhci, "Can't allocate interrupter %d, max interrupters %d\n",
-			  intr_num, xhci->max_interrupters);
-		return NULL;
-	}
-
-	if (xhci->interrupter) {
-		xhci_warn(xhci, "Can't allocate already set up interrupter %d\n", intr_num);
-		return NULL;
-	}
-
 	ir = kzalloc_node(sizeof(*ir), flags, dev_to_node(dev));
 	if (!ir)
 		return NULL;
 
-	ir->ir_set = &xhci->run_regs->ir_set[intr_num];
 	ir->event_ring = xhci_ring_alloc(xhci, ERST_NUM_SEGS, 1, TYPE_EVENT, 0, flags);
 	if (!ir->event_ring) {
-		xhci_warn(xhci, "Failed to allocate interrupter %d event ring\n", intr_num);
-		goto fail_ir;
+		xhci_warn(xhci, "Failed to allocate interrupter event ring\n");
+		kfree(ir);
+		return NULL;
 	}
 
 	ret = xhci_alloc_erst(xhci, ir->event_ring, &ir->erst, flags);
 	if (ret) {
-		xhci_warn(xhci, "Failed to allocate interrupter %d erst\n", intr_num);
-		goto fail_ev;
+		xhci_warn(xhci, "Failed to allocate interrupter erst\n");
+		xhci_ring_free(xhci, ir->event_ring);
+		kfree(ir);
+		return NULL;
+	}
+
+	return ir;
+}
+
+static int
+xhci_add_interrupter(struct xhci_hcd *xhci, struct xhci_interrupter *ir,
+		     unsigned int intr_num)
+{
+	u64 erst_base;
+	u32 erst_size;
+	if (intr_num > xhci->max_interrupters) {
+		xhci_warn(xhci, "Can't add interrupter %d, max interrupters %d\n",
+			  intr_num, xhci->max_interrupters);
+		return -EINVAL;
 	}
+
+	ir->ir_set = &xhci->run_regs->ir_set[intr_num];
+
 	/* set ERST count with the number of entries in the segment table */
 	erst_size = readl(&ir->ir_set->erst_size);
 	erst_size &= ERST_SIZE_MASK;
@@ -2278,14 +2287,7 @@ xhci_alloc_interrupter(struct xhci_hcd *xhci, unsigned int intr_num, gfp_t flags
 	/* Set the event ring dequeue address of this interrupter */
 	xhci_set_hc_event_deq(xhci, ir);
 
-	return ir;
-
-fail_ev:
-	xhci_ring_free(xhci, ir->event_ring);
-fail_ir:
-	kfree(ir);
-
-	return NULL;
+	return 0;
 }
 
 int xhci_mem_init(struct xhci_hcd *xhci, gfp_t flags)
@@ -2407,15 +2409,17 @@ int xhci_mem_init(struct xhci_hcd *xhci, gfp_t flags)
 		 "// Doorbell array is located at offset 0x%x from cap regs base addr",
 		 val);
 	xhci->dba = (void __iomem *) xhci->cap_regs + val;
 
-	/* Set ir_set to interrupt register set 0 */
-	/* allocate and set up primary interrupter with an event ring. */
+	/* Allocate and set up primary interrupter 0 with an event ring. */
 	xhci_dbg_trace(xhci, trace_xhci_dbg_init,
 		       "Allocating primary event ring");
-	xhci->interrupter = xhci_alloc_interrupter(xhci, 0, flags);
+	xhci->interrupter = xhci_alloc_interrupter(xhci, flags);
 	if (!xhci->interrupter)
 		goto fail;
 
+	if (xhci_add_interrupter(xhci, xhci->interrupter, 0))
+		goto fail;
+
 	xhci->isoc_bei_interval = AVOID_BEI_INTERVAL_MAX;
 
 	/*
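
The locking rationale above can be sketched outside the kernel. The
stand-alone user-space model below only illustrates the "allocate without
the lock, add under the lock" split; it is not the xhci code. The names
demo_alloc_interrupter()/demo_add_interrupter(), the fixed-size array and
the pthread mutex are stand-ins for xhci_alloc_interrupter(),
xhci_add_interrupter(), the interrupter array and the xhci spinlock.

/* Stand-alone model of "allocate unlocked, add under lock" (not kernel code). */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define MAX_INTERRUPTERS 8

struct demo_interrupter {
	int intr_num;
	void *event_ring;		/* stand-in for the real event ring */
};

static struct demo_interrupter *interrupters[MAX_INTERRUPTERS];
static pthread_mutex_t demo_lock = PTHREAD_MUTEX_INITIALIZER;

/* May allocate/sleep freely: nothing here touches shared state. */
static struct demo_interrupter *demo_alloc_interrupter(void)
{
	struct demo_interrupter *ir = calloc(1, sizeof(*ir));

	if (!ir)
		return NULL;
	ir->event_ring = malloc(4096);
	if (!ir->event_ring) {
		free(ir);
		return NULL;
	}
	return ir;
}

/* Short, non-allocating step done while holding the lock. */
static int demo_add_interrupter(struct demo_interrupter *ir, int intr_num)
{
	if (intr_num >= MAX_INTERRUPTERS || interrupters[intr_num])
		return -1;
	ir->intr_num = intr_num;
	interrupters[intr_num] = ir;
	return 0;
}

int main(void)
{
	struct demo_interrupter *ir = demo_alloc_interrupter();	/* no lock held */

	if (!ir)
		return 1;
	pthread_mutex_lock(&demo_lock);
	if (demo_add_interrupter(ir, 0))
		printf("add failed\n");
	pthread_mutex_unlock(&demo_lock);
	printf("interrupter 0 %s\n", interrupters[0] ? "registered" : "missing");
	return 0;
}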
From patchwork Fri Jun 2 14:40:04 2023
X-Patchwork-Submitter: Mathias Nyman
X-Patchwork-Id: 688682
From: Mathias Nyman
Cc: Mathias Nyman, chao zeng, Miller Hunter
Subject: [PATCH 06/11] xhci: Fix transfer ring expansion size calculation
Date: Fri, 2 Jun 2023 17:40:04 +0300
Message-Id: <20230602144009.1225632-7-mathias.nyman@linux.intel.com>
In-Reply-To: <20230602144009.1225632-1-mathias.nyman@linux.intel.com>
References: <20230602144009.1225632-1-mathias.nyman@linux.intel.com>

The amount of new TRBs needed is calculated incorrectly when expanding a
transfer ring. The room_on_ring() helper will correctly report that the
ring needs expansion if the enqueue pointer is about to reach the dequeue
segment. If enqueue reaches the dequeue segment then there is no easy way
to expand the ring by adding new segments between enqueue and dequeue. This
leads to ring expansion even if num_trbs_free is larger than the num_trbs
we are queueing. As a result we try to store a negative number in an
unsigned int, leading to a huge perceived TRB need and a doubling of the
ring size.

Rework and rename room_on_ring() into a helper that checks whether the ring
needs expansion and returns the number of new segments needed. Don't rely
on the tracked ring->num_trbs_free value, as it has turned out to be
unreliable. Use the ring enqueue and dequeue positions to determine the
expansion need.

The unsigned int issue was first reported by Chao Zeng, and a bit later
seen in a real world bug.

Reported-by: chao zeng
Closes: https://bugzilla.kernel.org/show_bug.cgi?id=217242
Tested-by: Miller Hunter
Signed-off-by: Mathias Nyman
---
 drivers/usb/host/xhci-mem.c  | 14 ++------
 drivers/usb/host/xhci-ring.c | 64 +++++++++++++++++++++++-------------
 2 files changed, 45 insertions(+), 33 deletions(-)

diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
index 2bf8121f4d36..c6f3ef2334a3 100644
--- a/drivers/usb/host/xhci-mem.c
+++ b/drivers/usb/host/xhci-mem.c
@@ -422,22 +422,14 @@ void xhci_free_endpoint_ring(struct xhci_hcd *xhci,
  * Allocate a new ring which has same segment numbers and link the two rings.
  */
 int xhci_ring_expansion(struct xhci_hcd *xhci, struct xhci_ring *ring,
-				unsigned int num_trbs, gfp_t flags)
+				unsigned int num_new_segs, gfp_t flags)
 {
 	struct xhci_segment	*first;
 	struct xhci_segment	*last;
-	unsigned int		num_segs;
-	unsigned int		num_segs_needed;
 	int			ret;
 
-	num_segs_needed = (num_trbs + (TRBS_PER_SEGMENT - 1) - 1) /
-				(TRBS_PER_SEGMENT - 1);
-
-	/* Allocate number of segments we needed, or double the ring size */
-	num_segs = max(ring->num_segs, num_segs_needed);
-
 	ret = xhci_alloc_segments_for_ring(xhci, &first, &last,
-			num_segs, ring->cycle_state, ring->type,
+			num_new_segs, ring->cycle_state, ring->type,
 			ring->bounce_buf_len, flags);
 	if (ret)
 		return -ENOMEM;
@@ -457,7 +449,7 @@ int xhci_ring_expansion(struct xhci_hcd *xhci, struct xhci_ring *ring,
 		return ret;
 	}
 
-	xhci_link_rings(xhci, ring, first, last, num_segs);
+	xhci_link_rings(xhci, ring, first, last, num_new_segs);
 	trace_xhci_ring_expansion(ring);
 	xhci_dbg_trace(xhci, trace_xhci_dbg_ring_expansion,
 			"ring expansion succeed, now has %d segments",
diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
index 2bc82b3a2f98..2722dca6218e 100644
--- a/drivers/usb/host/xhci-ring.c
+++ b/drivers/usb/host/xhci-ring.c
@@ -299,22 +299,45 @@ static int xhci_num_trbs_to(struct xhci_segment *start_seg, union xhci_trb *star
 /*
  * Check to see if there's room to enqueue num_trbs on the ring and make sure
  * enqueue pointer will not advance into dequeue segment. See rules above.
+ * return number of new segments needed to ensure this.
  */
-static inline int room_on_ring(struct xhci_hcd *xhci, struct xhci_ring *ring,
-		unsigned int num_trbs)
+
+static unsigned int xhci_ring_expansion_needed(struct xhci_hcd *xhci, struct xhci_ring *ring,
+					       unsigned int num_trbs)
 {
-	int num_trbs_in_deq_seg;
+	struct xhci_segment *seg;
+	int trbs_past_seg;
+	int enq_used;
+	int new_segs;
+
+	enq_used = ring->enqueue - ring->enq_seg->trbs;
 
-	if (ring->num_trbs_free < num_trbs)
+	/* how many trbs will be queued past the enqueue segment? */
+	trbs_past_seg = enq_used + num_trbs - (TRBS_PER_SEGMENT - 1);
+
+	if (trbs_past_seg <= 0)
 		return 0;
 
-	if (ring->type != TYPE_COMMAND && ring->type != TYPE_EVENT) {
-		num_trbs_in_deq_seg = ring->dequeue - ring->deq_seg->trbs;
-		if (ring->num_trbs_free < num_trbs + num_trbs_in_deq_seg)
-			return 0;
+	/* Empty ring special case, enqueue stuck on link trb while dequeue advanced */
+	if (trb_is_link(ring->enqueue) && ring->enq_seg->next->trbs == ring->dequeue)
+		return 0;
+
+	new_segs = 1 + (trbs_past_seg / (TRBS_PER_SEGMENT - 1));
+	seg = ring->enq_seg;
+
+	while (new_segs > 0) {
+		seg = seg->next;
+		if (seg == ring->deq_seg) {
+			xhci_dbg(xhci, "Ring expansion by %d segments needed\n",
+				 new_segs);
+			xhci_dbg(xhci, "Adding %d trbs moves enq %d trbs into deq seg\n",
+				 num_trbs, trbs_past_seg % TRBS_PER_SEGMENT);
+			return new_segs;
+		}
+		new_segs--;
 	}
 
-	return 1;
+	return 0;
 }
 
 /* Ring the host controller doorbell after placing a command on the ring */
@@ -3165,13 +3188,13 @@ static void queue_trb(struct xhci_hcd *xhci, struct xhci_ring *ring,
 
 /*
  * Does various checks on the endpoint ring, and makes it ready to queue num_trbs.
- * FIXME allocate segments if the ring is full.
+ * expand ring if it start to be full.
  */
 static int prepare_ring(struct xhci_hcd *xhci, struct xhci_ring *ep_ring,
 		u32 ep_state, unsigned int num_trbs, gfp_t mem_flags)
 {
-	unsigned int num_trbs_needed;
 	unsigned int link_trb_count = 0;
+	unsigned int new_segs = 0;
 
 	/* Make sure the endpoint has been added to xHC schedule */
 	switch (ep_state) {
@@ -3202,20 +3225,17 @@ static int prepare_ring(struct xhci_hcd *xhci, struct xhci_ring *ep_ring,
 		return -EINVAL;
 	}
 
-	while (1) {
-		if (room_on_ring(xhci, ep_ring, num_trbs))
-			break;
-
-		if (ep_ring == xhci->cmd_ring) {
-			xhci_err(xhci, "Do not support expand command ring\n");
-			return -ENOMEM;
-		}
+	if (ep_ring != xhci->cmd_ring) {
+		new_segs = xhci_ring_expansion_needed(xhci, ep_ring, num_trbs);
+	} else if (ep_ring->num_trbs_free <= num_trbs) {
+		xhci_err(xhci, "Do not support expand command ring\n");
+		return -ENOMEM;
+	}
 
+	if (new_segs) {
 		xhci_dbg_trace(xhci, trace_xhci_dbg_ring_expansion,
 				"ERROR no room on ep ring, try ring expansion");
-		num_trbs_needed = num_trbs - ep_ring->num_trbs_free;
-		if (xhci_ring_expansion(xhci, ep_ring, num_trbs_needed,
-					mem_flags)) {
+		if (xhci_ring_expansion(xhci, ep_ring, new_segs, mem_flags)) {
 			xhci_err(xhci, "Ring expansion failed\n");
 			return -ENOMEM;
 		}
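
As a rough sanity check of the arithmetic described above (not part of the
patch), the stand-alone program below models only the segment-count
estimate: with TRBS_PER_SEGMENT = 256, an enqueue offset of 250 and a
request for 20 TRBs gives trbs_past_seg = 250 + 20 - 255 = 15, i.e. one new
segment, while the old num_trbs - num_trbs_free arithmetic underflows to a
huge unsigned value whenever num_trbs_free is larger than num_trbs. The
real helper additionally walks toward the dequeue segment before deciding;
that part is left out here, and the names are illustrative only.

/* Stand-alone model of the ring expansion size estimate (illustrative only). */
#include <stdio.h>

#define TRBS_PER_SEGMENT 256	/* same value the xhci driver uses */

/* New segments needed so the requested TRBs fit past the enqueue segment. */
static unsigned int segs_needed(unsigned int enq_offset, unsigned int num_trbs)
{
	int trbs_past_seg = (int)enq_offset + (int)num_trbs - (TRBS_PER_SEGMENT - 1);

	if (trbs_past_seg <= 0)
		return 0;
	return 1 + trbs_past_seg / (TRBS_PER_SEGMENT - 1);
}

int main(void)
{
	unsigned int num_trbs = 20, num_trbs_free = 100;

	/* Old style: underflows when free TRBs already exceed the request. */
	unsigned int old_needed = num_trbs - num_trbs_free;

	printf("old num_trbs - num_trbs_free = %u (underflow)\n", old_needed);
	printf("new estimate, enq offset 250, 20 TRBs: %u segment(s)\n",
	       segs_needed(250, num_trbs));
	printf("new estimate, enq offset 10, 20 TRBs: %u segment(s)\n",
	       segs_needed(10, num_trbs));
	return 0;
}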
From patchwork Fri Jun 2 14:40:05 2023
X-Patchwork-Submitter: Mathias Nyman
X-Patchwork-Id: 688987
From: Mathias Nyman
Cc: Mathias Nyman, Miller Hunter
Subject: [PATCH 07/11] xhci: Stop unnecessary tracking of free trbs in a ring
Date: Fri, 2 Jun 2023 17:40:05 +0300
Message-Id: <20230602144009.1225632-8-mathias.nyman@linux.intel.com>
In-Reply-To: <20230602144009.1225632-1-mathias.nyman@linux.intel.com>
References: <20230602144009.1225632-1-mathias.nyman@linux.intel.com>

Trying to keep track of free TRBs in a ring by adding and subtracting
deltas each time an enqueue or dequeue pointer is increased or moved has
proven to be buggy and complicated, especially over long periods of time.

Recently a bug in counting free TRBs was fixed, now taking into account
cancelled URBs that were turned into no-ops, preventing free_trbs from
slowly wandering off and causing unnecessary ring expansion. See commit
fe82f16aafda ("xhci: Fix incorrect tracking of free space on transfer
rings").

It turns out it's a lot easier to just calculate the number of free TRBs
based on the ring size and the current enqueue and dequeue pointer values.

This is currently only needed for the command ring, as multi-segment
transfer rings already ensure there is enough room on the ring during the
ring expansion check.

We could get rid of the ring->num_trbs_free entry completely, but as the
xhci DbC code also uses it we don't clean that up in this patch.
Reported-by: Miller Hunter
Closes: https://bugzilla.kernel.org/show_bug.cgi?id=217242
Tested-by: Miller Hunter
Signed-off-by: Mathias Nyman
---
 drivers/usb/host/xhci-mem.c   |  1 -
 drivers/usb/host/xhci-ring.c  | 75 +++++++++++++++--------------------
 drivers/usb/host/xhci-trace.h |  5 +--
 drivers/usb/host/xhci.h       |  3 +-
 4 files changed, 35 insertions(+), 49 deletions(-)

diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
index c6f3ef2334a3..a8e9a4bb1537 100644
--- a/drivers/usb/host/xhci-mem.c
+++ b/drivers/usb/host/xhci-mem.c
@@ -143,7 +143,6 @@ static void xhci_link_rings(struct xhci_hcd *xhci, struct xhci_ring *ring,
 	xhci_link_segments(ring->enq_seg, first, ring->type, chain_links);
 	xhci_link_segments(last, next, ring->type, chain_links);
 	ring->num_segs += num_segs;
-	ring->num_trbs_free += (TRBS_PER_SEGMENT - 1) * num_segs;
 
 	if (ring->type != TYPE_EVENT && ring->enq_seg == ring->last_seg) {
 		ring->last_seg->trbs[TRBS_PER_SEGMENT-1].link.control
diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
index 2722dca6218e..646ff125def5 100644
--- a/drivers/usb/host/xhci-ring.c
+++ b/drivers/usb/host/xhci-ring.c
@@ -174,12 +174,10 @@ void inc_deq(struct xhci_hcd *xhci, struct xhci_ring *ring)
 
 	/* All other rings have link trbs */
 	if (!trb_is_link(ring->dequeue)) {
-		if (last_trb_on_seg(ring->deq_seg, ring->dequeue)) {
+		if (last_trb_on_seg(ring->deq_seg, ring->dequeue))
 			xhci_warn(xhci, "Missing link TRB at end of segment\n");
-		} else {
+		else
 			ring->dequeue++;
-			ring->num_trbs_free++;
-		}
 	}
 
 	while (trb_is_link(ring->dequeue)) {
@@ -221,9 +219,6 @@ static void inc_enq(struct xhci_hcd *xhci, struct xhci_ring *ring,
 	unsigned int link_trb_count = 0;
 
 	chain = le32_to_cpu(ring->enqueue->generic.field[3]) & TRB_CHAIN;
-	/* If this is not event ring, there is one less usable TRB */
-	if (!trb_is_link(ring->enqueue))
-		ring->num_trbs_free--;
 
 	if (last_trb_on_seg(ring->enq_seg, ring->enqueue)) {
 		xhci_err(xhci, "Tried to move enqueue past ring segment\n");
@@ -276,24 +271,40 @@ static void inc_enq(struct xhci_hcd *xhci, struct xhci_ring *ring,
 	trace_xhci_inc_enq(ring);
 }
 
-static int xhci_num_trbs_to(struct xhci_segment *start_seg, union xhci_trb *start,
-			    struct xhci_segment *end_seg, union xhci_trb *end,
-			    unsigned int num_segs)
+/*
+ * Return number of free normal TRBs from enqueue to dequeue pointer on ring.
+ * Not counting an assumed link TRB at end of each TRBS_PER_SEGMENT sized segment.
+ * Only for transfer and command rings where driver is the producer, not for
+ * event rings.
+ */
+static unsigned int xhci_num_trbs_free(struct xhci_hcd *xhci, struct xhci_ring *ring)
 {
+	struct xhci_segment *enq_seg = ring->enq_seg;
+	union xhci_trb *enq = ring->enqueue;
 	union xhci_trb *last_on_seg;
-	int num = 0;
+	unsigned int free = 0;
 	int i = 0;
 
+	/* Ring might be empty even if enq != deq if enq is left on a link trb */
+	if (trb_is_link(enq)) {
+		enq_seg = enq_seg->next;
+		enq = enq_seg->trbs;
+	}
+
+	/* Empty ring, common case, don't walk the segments */
+	if (enq == ring->dequeue)
+		return ring->num_segs * (TRBS_PER_SEGMENT - 1);
+
 	do {
-		if (start_seg == end_seg && end >= start)
-			return num + (end - start);
-		last_on_seg = &start_seg->trbs[TRBS_PER_SEGMENT - 1];
-		num += last_on_seg - start;
-		start_seg = start_seg->next;
-		start = start_seg->trbs;
-	} while (i++ <= num_segs);
-
-	return -EINVAL;
+		if (ring->deq_seg == enq_seg && ring->dequeue >= enq)
+			return free + (ring->dequeue - enq);
+		last_on_seg = &enq_seg->trbs[TRBS_PER_SEGMENT - 1];
+		free += last_on_seg - enq;
+		enq_seg = enq_seg->next;
+		enq = enq_seg->trbs;
+	} while (i++ <= ring->num_segs);
+
+	return free;
 }
 
 /*
@@ -1291,10 +1302,7 @@ static void update_ring_for_set_deq_completion(struct xhci_hcd *xhci,
 		unsigned int ep_index)
 {
 	union xhci_trb *dequeue_temp;
-	int num_trbs_free_temp;
-	bool revert = false;
 
-	num_trbs_free_temp = ep_ring->num_trbs_free;
 	dequeue_temp = ep_ring->dequeue;
 
 	/* If we get two back-to-back stalls, and the first stalled transfer
@@ -1310,7 +1318,6 @@ static void update_ring_for_set_deq_completion(struct xhci_hcd *xhci,
 
 	while (ep_ring->dequeue != dev->eps[ep_index].queued_deq_ptr) {
 		/* We have more usable TRBs */
-		ep_ring->num_trbs_free++;
 		ep_ring->dequeue++;
 		if (trb_is_link(ep_ring->dequeue)) {
 			if (ep_ring->dequeue ==
@@ -1320,15 +1327,10 @@ static void update_ring_for_set_deq_completion(struct xhci_hcd *xhci,
 			ep_ring->dequeue = ep_ring->deq_seg->trbs;
 		}
 		if (ep_ring->dequeue == dequeue_temp) {
-			revert = true;
+			xhci_dbg(xhci, "Unable to find new dequeue pointer\n");
 			break;
 		}
 	}
-
-	if (revert) {
-		xhci_dbg(xhci, "Unable to find new dequeue pointer\n");
-		ep_ring->num_trbs_free = num_trbs_free_temp;
-	}
 }
 
 /*
@@ -2183,7 +2185,6 @@ static int finish_td(struct xhci_hcd *xhci, struct xhci_virt_ep *ep,
 		     u32 trb_comp_code)
 {
 	struct xhci_ep_ctx *ep_ctx;
-	int trbs_freed;
 
 	ep_ctx = xhci_get_ep_ctx(xhci, ep->vdev->out_ctx, ep->ep_index);
 
@@ -2253,13 +2254,6 @@ static int finish_td(struct xhci_hcd *xhci, struct xhci_virt_ep *ep,
 	}
 
 	/* Update ring dequeue pointer */
-	trbs_freed = xhci_num_trbs_to(ep_ring->deq_seg, ep_ring->dequeue,
-				      td->last_trb_seg, td->last_trb,
-				      ep_ring->num_segs);
-	if (trbs_freed < 0)
-		xhci_dbg(xhci, "Failed to count freed trbs at TD finish\n");
-	else
-		ep_ring->num_trbs_free += trbs_freed;
 	ep_ring->dequeue = td->last_trb;
 	ep_ring->deq_seg = td->last_trb_seg;
 	inc_deq(xhci, ep_ring);
@@ -2483,7 +2477,6 @@ static int skip_isoc_td(struct xhci_hcd *xhci, struct xhci_td *td,
 	/* Update ring dequeue pointer */
 	ep->ring->dequeue = td->last_trb;
 	ep->ring->deq_seg = td->last_trb_seg;
-	ep->ring->num_trbs_free += td->num_trbs - 1;
 	inc_deq(xhci, ep->ring);
 
 	return xhci_td_cleanup(xhci, td, ep->ring, status);
@@ -3227,7 +3220,7 @@ static int prepare_ring(struct xhci_hcd *xhci, struct xhci_ring *ep_ring,
 
 	if (ep_ring != xhci->cmd_ring) {
 		new_segs = xhci_ring_expansion_needed(xhci, ep_ring, num_trbs);
-	} else if (ep_ring->num_trbs_free <= num_trbs) {
+	} else if (xhci_num_trbs_free(xhci, ep_ring) <= num_trbs) {
 		xhci_err(xhci, "Do not support expand command ring\n");
 		return -ENOMEM;
 	}
@@ -4205,7 +4198,6 @@ static int xhci_queue_isoc_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
 	ep_ring->enqueue = urb_priv->td[0].first_trb;
 	ep_ring->enq_seg = urb_priv->td[0].start_seg;
 	ep_ring->cycle_state = start_cycle;
-	ep_ring->num_trbs_free = ep_ring->num_trbs_free_temp;
 	usb_hcd_unlink_urb_from_ep(bus_to_hcd(urb->dev->bus), urb);
 	return ret;
 }
@@ -4287,7 +4279,6 @@ int xhci_queue_isoc_tx_prepare(struct xhci_hcd *xhci, gfp_t mem_flags,
 	}
 
 skip_start_over:
-	ep_ring->num_trbs_free_temp = ep_ring->num_trbs_free;
 
 	return xhci_queue_isoc_tx(xhci, mem_flags, urb, slot_id, ep_index);
 }
diff --git a/drivers/usb/host/xhci-trace.h b/drivers/usb/host/xhci-trace.h
index 7555c4ea7c4b..d6b32f2ad90e 100644
--- a/drivers/usb/host/xhci-trace.h
+++ b/drivers/usb/host/xhci-trace.h
@@ -458,7 +458,6 @@ DECLARE_EVENT_CLASS(xhci_log_ring,
 		__field(unsigned int, num_segs)
 		__field(unsigned int, stream_id)
 		__field(unsigned int, cycle_state)
-		__field(unsigned int, num_trbs_free)
 		__field(unsigned int, bounce_buf_len)
 	),
 	TP_fast_assign(
@@ -469,18 +468,16 @@ DECLARE_EVENT_CLASS(xhci_log_ring,
 		__entry->enq_seg = ring->enq_seg->dma;
 		__entry->deq_seg = ring->deq_seg->dma;
 		__entry->cycle_state = ring->cycle_state;
-		__entry->num_trbs_free = ring->num_trbs_free;
 		__entry->bounce_buf_len = ring->bounce_buf_len;
 		__entry->enq = xhci_trb_virt_to_dma(ring->enq_seg, ring->enqueue);
 		__entry->deq = xhci_trb_virt_to_dma(ring->deq_seg, ring->dequeue);
 	),
-	TP_printk("%s %p: enq %pad(%pad) deq %pad(%pad) segs %d stream %d free_trbs %d bounce %d cycle %d",
+	TP_printk("%s %p: enq %pad(%pad) deq %pad(%pad) segs %d stream %d bounce %d cycle %d",
 		xhci_ring_type_string(__entry->type), __entry->ring,
 		&__entry->enq, &__entry->enq_seg,
 		&__entry->deq, &__entry->deq_seg,
 		__entry->num_segs,
 		__entry->stream_id,
-		__entry->num_trbs_free,
 		__entry->bounce_buf_len,
 		__entry->cycle_state
 	)
diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
index 56e318c384ff..456e1c8ca005 100644
--- a/drivers/usb/host/xhci.h
+++ b/drivers/usb/host/xhci.h
@@ -1633,8 +1633,7 @@ struct xhci_ring {
 	u32			cycle_state;
 	unsigned int		stream_id;
 	unsigned int		num_segs;
-	unsigned int		num_trbs_free;
-	unsigned int		num_trbs_free_temp;
+	unsigned int		num_trbs_free; /* used only by xhci DbC */
 	unsigned int		bounce_buf_len;
 	enum xhci_ring_type	type;
 	bool			last_td_was_short;
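
The general idea of deriving free space from the enqueue/dequeue positions
on demand, instead of keeping a running counter in sync, can be shown with
a much simpler structure. The stand-alone sketch below uses a single
fixed-size circular buffer; it deliberately ignores the linked segments and
link TRBs of a real xhci ring, and all names are illustrative only.

/* Simplified model: free entries computed from enqueue/dequeue positions
 * instead of a separately maintained num_trbs_free counter (toy code). */
#include <stdio.h>

#define RING_SIZE 16	/* toy ring; real xhci rings are linked segments */

struct toy_ring {
	unsigned int enq;	/* producer index */
	unsigned int deq;	/* consumer index */
};

/* Free entries between enqueue and dequeue, leaving one slot unused so a
 * full ring and an empty ring can be told apart. */
static unsigned int toy_free_entries(const struct toy_ring *r)
{
	return (r->deq + RING_SIZE - r->enq - 1) % RING_SIZE;
}

int main(void)
{
	struct toy_ring r = { .enq = 0, .deq = 0 };

	printf("empty ring: %u free\n", toy_free_entries(&r));		/* 15 */
	r.enq = 10;		/* producer queued 10 entries */
	printf("after queueing 10: %u free\n", toy_free_entries(&r));	/* 5 */
	r.deq = 6;		/* consumer completed 6 entries */
	printf("after completing 6: %u free\n", toy_free_entries(&r));	/* 11 */
	return 0;
}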
From patchwork Fri Jun 2 14:40:06 2023
X-Patchwork-Submitter: Mathias Nyman
X-Patchwork-Id: 688988
From: Mathias Nyman
Cc: Weitao Wang, stable@vger.kernel.org, Mathias Nyman
Subject: [PATCH 08/11] xhci: Fix resume issue of some ZHAOXIN hosts
Date: Fri, 2 Jun 2023 17:40:06 +0300
Message-Id: <20230602144009.1225632-9-mathias.nyman@linux.intel.com>
In-Reply-To: <20230602144009.1225632-1-mathias.nyman@linux.intel.com>
References: <20230602144009.1225632-1-mathias.nyman@linux.intel.com>

From: Weitao Wang

On the ZHAOXIN ZX-100 project, xHCI can't work normally after resume from a
system Sx state. To fix this issue, reinitialize the xHCI controller on
resume from a system Sx state instead of restoring it. Add the
XHCI_RESET_ON_RESUME quirk for ZX-100 to fix the issue of resuming from a
system Sx state.

Cc: stable@vger.kernel.org
Signed-off-by: Weitao Wang
Signed-off-by: Mathias Nyman
---
 drivers/usb/host/xhci-pci.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
index 611703f863e0..2103c6c0a967 100644
--- a/drivers/usb/host/xhci-pci.c
+++ b/drivers/usb/host/xhci-pci.c
@@ -521,6 +521,11 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
 	     pdev->device == PCI_DEVICE_ID_AMD_PROMONTORYA_4))
 		xhci->quirks |= XHCI_NO_SOFT_RETRY;
 
+	if (pdev->vendor == PCI_VENDOR_ID_ZHAOXIN) {
+		if (pdev->device == 0x9202)
+			xhci->quirks |= XHCI_RESET_ON_RESUME;
+	}
+
 	/* xHC spec requires PCI devices to support D3hot and D3cold */
 	if (xhci->hci_version >= 0x120)
 		xhci->quirks |= XHCI_DEFAULT_PM_RUNTIME_ALLOW;
From patchwork Fri Jun 2 14:40:07 2023
X-Patchwork-Submitter: Mathias Nyman
X-Patchwork-Id: 688681
From: Mathias Nyman
Cc: Weitao Wang, stable@vger.kernel.org, Mathias Nyman
Subject: [PATCH 09/11] xhci: Fix TRB prefetch issue of ZHAOXIN hosts
Date: Fri, 2 Jun 2023 17:40:07 +0300
Message-Id: <20230602144009.1225632-10-mathias.nyman@linux.intel.com>
In-Reply-To: <20230602144009.1225632-1-mathias.nyman@linux.intel.com>
References: <20230602144009.1225632-1-mathias.nyman@linux.intel.com>

From: Weitao Wang

On some ZHAOXIN hosts, the xHCI controller prefetches TRBs for performance
improvement. However, this TRB prefetch mechanism may cross a page boundary
and access memory not allocated by the xHCI driver. To fix this issue,
allocate two pages for each segment and use only the first page. Add a
quirk XHCI_ZHAOXIN_TRB_FETCH for this issue.

Cc: stable@vger.kernel.org
Signed-off-by: Weitao Wang
Signed-off-by: Mathias Nyman
---
 drivers/usb/host/xhci-mem.c | 8 ++++++--
 drivers/usb/host/xhci-pci.c | 7 ++++++-
 drivers/usb/host/xhci.h     | 1 +
 3 files changed, 13 insertions(+), 3 deletions(-)

diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
index a8e9a4bb1537..c4170421bc9c 100644
--- a/drivers/usb/host/xhci-mem.c
+++ b/drivers/usb/host/xhci-mem.c
@@ -2345,8 +2345,12 @@ int xhci_mem_init(struct xhci_hcd *xhci, gfp_t flags)
 	 * and our use of dma addresses in the trb_address_map radix tree needs
 	 * TRB_SEGMENT_SIZE alignment, so we pick the greater alignment need.
 	 */
-	xhci->segment_pool = dma_pool_create("xHCI ring segments", dev,
-			TRB_SEGMENT_SIZE, TRB_SEGMENT_SIZE, xhci->page_size);
+	if (xhci->quirks & XHCI_ZHAOXIN_TRB_FETCH)
+		xhci->segment_pool = dma_pool_create("xHCI ring segments", dev,
+				TRB_SEGMENT_SIZE * 2, TRB_SEGMENT_SIZE * 2, xhci->page_size * 2);
+	else
+		xhci->segment_pool = dma_pool_create("xHCI ring segments", dev,
+				TRB_SEGMENT_SIZE, TRB_SEGMENT_SIZE, xhci->page_size);
 
 	/* See Table 46 and Note on Figure 55 */
 	xhci->device_pool = dma_pool_create("xHCI input/output contexts", dev,
diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
index 2103c6c0a967..3aad946bab68 100644
--- a/drivers/usb/host/xhci-pci.c
+++ b/drivers/usb/host/xhci-pci.c
@@ -522,8 +522,13 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
 		xhci->quirks |= XHCI_NO_SOFT_RETRY;
 
 	if (pdev->vendor == PCI_VENDOR_ID_ZHAOXIN) {
-		if (pdev->device == 0x9202)
+		if (pdev->device == 0x9202) {
 			xhci->quirks |= XHCI_RESET_ON_RESUME;
+			xhci->quirks |= XHCI_ZHAOXIN_TRB_FETCH;
+		}
+
+		if (pdev->device == 0x9203)
+			xhci->quirks |= XHCI_ZHAOXIN_TRB_FETCH;
 	}
 
 	/* xHC spec requires PCI devices to support D3hot and D3cold */
diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
index 456e1c8ca005..5a495126c8ba 100644
--- a/drivers/usb/host/xhci.h
+++ b/drivers/usb/host/xhci.h
@@ -1904,6 +1904,7 @@ struct xhci_hcd {
 #define XHCI_EP_CTX_BROKEN_DCS	BIT_ULL(42)
 #define XHCI_SUSPEND_RESUME_CLKS	BIT_ULL(43)
 #define XHCI_RESET_TO_DEFAULT	BIT_ULL(44)
+#define XHCI_ZHAOXIN_TRB_FETCH	BIT_ULL(45)
 	unsigned int		num_active_eps;
 	unsigned int		limit_active_eps;
From patchwork Fri Jun 2 14:40:08 2023
X-Patchwork-Submitter: Mathias Nyman
X-Patchwork-Id: 688680
From: Mathias Nyman
Cc: Weitao Wang, Mathias Nyman, stable@vger.kernel.org
Subject: [PATCH 10/11] xhci: Show ZHAOXIN xHCI root hub speed correctly
Date: Fri, 2 Jun 2023 17:40:08 +0300
Message-Id: <20230602144009.1225632-11-mathias.nyman@linux.intel.com>
In-Reply-To: <20230602144009.1225632-1-mathias.nyman@linux.intel.com>
References: <20230602144009.1225632-1-mathias.nyman@linux.intel.com>
X-Mailing-List: linux-usb@vger.kernel.org

From: Weitao Wang

Some ZHAOXIN xHCI controllers follow the USB 3.1 spec but only support Gen1 speed (5 Gbps). The Linux kernel, however, reports the root hub speed as 10 Gbps whenever the host claims USB 3.1 support. To fix this on ZHAOXIN platforms, read the USB speed IDs the host actually supports and use them to determine the root hub speed. Add a quirk, XHCI_ZHAOXIN_HOST, for this.

[fix warning about uninitialized symbol -Mathias]

Suggested-by: Mathias Nyman
Cc: stable@vger.kernel.org
Signed-off-by: Weitao Wang
Signed-off-by: Mathias Nyman
---
 drivers/usb/host/xhci-mem.c | 31 ++++++++++++++++++++++++-------
 drivers/usb/host/xhci-pci.c |  2 ++
 drivers/usb/host/xhci.h     |  1 +
 3 files changed, 27 insertions(+), 7 deletions(-)

diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
index c4170421bc9c..19a402123de0 100644
--- a/drivers/usb/host/xhci-mem.c
+++ b/drivers/usb/host/xhci-mem.c
@@ -1961,7 +1961,7 @@ static void xhci_add_in_port(struct xhci_hcd *xhci, unsigned int num_ports,
 {
 	u32 temp, port_offset, port_count;
 	int i;
-	u8 major_revision, minor_revision;
+	u8 major_revision, minor_revision, tmp_minor_revision;
 	struct xhci_hub *rhub;
 	struct device *dev = xhci_to_hcd(xhci)->self.sysdev;
 	struct xhci_port_cap *port_cap;
@@ -1981,6 +1981,15 @@ static void xhci_add_in_port(struct xhci_hcd *xhci, unsigned int num_ports,
 		 */
 		if (minor_revision > 0x00 && minor_revision < 0x10)
 			minor_revision <<= 4;
+		/*
+		 * Some zhaoxin's xHCI controller that follow usb3.1 spec
+		 * but only support Gen1.
+		 */
+		if (xhci->quirks & XHCI_ZHAOXIN_HOST) {
+			tmp_minor_revision = minor_revision;
+			minor_revision = 0;
+		}
+
 	} else if (major_revision <= 0x02) {
 		rhub = &xhci->usb2_rhub;
 	} else {
@@ -1989,10 +1998,6 @@ static void xhci_add_in_port(struct xhci_hcd *xhci, unsigned int num_ports,
 		/* Ignoring port protocol we can't understand. FIXME */
 		return;
 	}
-	rhub->maj_rev = XHCI_EXT_PORT_MAJOR(temp);
-
-	if (rhub->min_rev < minor_revision)
-		rhub->min_rev = minor_revision;

 	/* Port offset and count in the third dword, see section 7.2 */
 	temp = readl(addr + 2);
@@ -2010,8 +2015,6 @@ static void xhci_add_in_port(struct xhci_hcd *xhci, unsigned int num_ports,
 	if (xhci->num_port_caps > max_caps)
 		return;

-	port_cap->maj_rev = major_revision;
-	port_cap->min_rev = minor_revision;
 	port_cap->psi_count = XHCI_EXT_PORT_PSIC(temp);

 	if (port_cap->psi_count) {
@@ -2032,6 +2035,11 @@ static void xhci_add_in_port(struct xhci_hcd *xhci, unsigned int num_ports,
 				  XHCI_EXT_PORT_PSIV(port_cap->psi[i - 1])))
 				port_cap->psi_uid_count++;

+			if (xhci->quirks & XHCI_ZHAOXIN_HOST &&
+			    major_revision == 0x03 &&
+			    XHCI_EXT_PORT_PSIV(port_cap->psi[i]) >= 5)
+				minor_revision = tmp_minor_revision;
+
 			xhci_dbg(xhci, "PSIV:%d PSIE:%d PLT:%d PFD:%d LP:%d PSIM:%d\n",
 				 XHCI_EXT_PORT_PSIV(port_cap->psi[i]),
 				 XHCI_EXT_PORT_PSIE(port_cap->psi[i]),
@@ -2041,6 +2049,15 @@ static void xhci_add_in_port(struct xhci_hcd *xhci, unsigned int num_ports,
 				 XHCI_EXT_PORT_PSIM(port_cap->psi[i]));
 		}
 	}
+
+	rhub->maj_rev = major_revision;
+
+	if (rhub->min_rev < minor_revision)
+		rhub->min_rev = minor_revision;
+
+	port_cap->maj_rev = major_revision;
+	port_cap->min_rev = minor_revision;
+
 	/* cache usb2 port capabilities */
 	if (major_revision < 0x03 && xhci->num_ext_caps < max_caps)
 		xhci->ext_caps[xhci->num_ext_caps++] = temp;
diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
index 3aad946bab68..88c16d91fb69 100644
--- a/drivers/usb/host/xhci-pci.c
+++ b/drivers/usb/host/xhci-pci.c
@@ -522,6 +522,8 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
 		xhci->quirks |= XHCI_NO_SOFT_RETRY;

 	if (pdev->vendor == PCI_VENDOR_ID_ZHAOXIN) {
+		xhci->quirks |= XHCI_ZHAOXIN_HOST;
+
 		if (pdev->device == 0x9202) {
 			xhci->quirks |= XHCI_RESET_ON_RESUME;
 			xhci->quirks |= XHCI_ZHAOXIN_TRB_FETCH;
diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
index 5a495126c8ba..7e282b4522c0 100644
--- a/drivers/usb/host/xhci.h
+++ b/drivers/usb/host/xhci.h
@@ -1905,6 +1905,7 @@ struct xhci_hcd {
 #define XHCI_SUSPEND_RESUME_CLKS	BIT_ULL(43)
 #define XHCI_RESET_TO_DEFAULT	BIT_ULL(44)
 #define XHCI_ZHAOXIN_TRB_FETCH	BIT_ULL(45)
+#define XHCI_ZHAOXIN_HOST	BIT_ULL(46)

 	unsigned int		num_active_eps;
 	unsigned int		limit_active_eps;
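The PSIV/PSIE/PSIM values used in the hunk above come from the Protocol Speed ID dwords of the host's extended capabilities. As a rough illustration (stand-alone C, not driver code; the macro names and the example dword are made up here, and the bit positions follow my reading of the xHCI spec's PSI layout, with PSIV in bits 3:0, PSIE in bits 5:4 and PSIM in bits 31:16), one such dword can be decoded like this:

#include <stdint.h>
#include <stdio.h>

/* Assumed field extraction for one Protocol Speed ID dword. */
#define PSI_PSIV(x)  ((x) & 0xf)            /* speed ID value */
#define PSI_PSIE(x)  (((x) >> 4) & 0x3)     /* exponent: 0=b/s 1=Kb/s 2=Mb/s 3=Gb/s */
#define PSI_PSIM(x)  (((x) >> 16) & 0xffff) /* mantissa */

/* Return the advertised link rate in Mbit/s for one PSI dword. */
static uint64_t psi_to_mbps(uint32_t psi)
{
    uint64_t rate = PSI_PSIM(psi);
    unsigned int exp = PSI_PSIE(psi);

    while (exp-- > 0)
        rate *= 1000;          /* b/s -> Kb/s -> Mb/s -> Gb/s */
    return rate / 1000000;     /* bits/s to Mbit/s */
}

int main(void)
{
    /* Hypothetical Gen1 entry: PSIV=4, PSIE=3 (Gb/s), PSIM=5 => 5 Gbps. */
    uint32_t psi = (5u << 16) | (3u << 4) | 4u;

    printf("PSIV=%u -> %llu Mbps\n", (unsigned)PSI_PSIV(psi),
           (unsigned long long)psi_to_mbps(psi));
    return 0;
}

With that reading, the quirk in the hunk above appears to restore the original USB 3.1 minor revision only when a speed ID value of 5 or higher is actually advertised, so a Gen1-only ZHAOXIN host keeps reporting 5 Gbps.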
From patchwork Fri Jun 2 14:40:09 2023
X-Patchwork-Submitter: Mathias Nyman
X-Patchwork-Id: 688986
From: Mathias Nyman
Cc: Weitao Wang, Mathias Nyman
Subject: [PATCH 11/11] xhci: Add ZHAOXIN xHCI host U1/U2 feature support
Date: Fri, 2 Jun 2023 17:40:09 +0300
Message-Id: <20230602144009.1225632-12-mathias.nyman@linux.intel.com>
In-Reply-To: <20230602144009.1225632-1-mathias.nyman@linux.intel.com>
References: <20230602144009.1225632-1-mathias.nyman@linux.intel.com>
X-Mailing-List: linux-usb@vger.kernel.org

From: Weitao Wang

Add U1/U2 (USB3 LPM) support for ZHAOXIN xHCI hosts. Both Intel and ZHAOXIN hosts need to check the tier at which a device is connected before enabling U1/U2, so remove the Intel-specific tier policy and implement a common one in xhci_check_tier_policy(). Vendors with a specific U1/U2 enable policy can declare it with a quirk.
Suggested-by: Mathias Nyman
Signed-off-by: Weitao Wang
Signed-off-by: Mathias Nyman
---
 drivers/usb/host/xhci-pci.c |  1 +
 drivers/usb/host/xhci.c     | 43 ++++++++++++++++---------------------
 2 files changed, 19 insertions(+), 25 deletions(-)

diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
index 88c16d91fb69..c6742bae41c0 100644
--- a/drivers/usb/host/xhci-pci.c
+++ b/drivers/usb/host/xhci-pci.c
@@ -523,6 +523,7 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)

 	if (pdev->vendor == PCI_VENDOR_ID_ZHAOXIN) {
 		xhci->quirks |= XHCI_ZHAOXIN_HOST;
+		xhci->quirks |= XHCI_LPM_SUPPORT;

 		if (pdev->device == 0x9202) {
 			xhci->quirks |= XHCI_RESET_ON_RESUME;
diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
index 176969bf2d5c..5b73a7d281ed 100644
--- a/drivers/usb/host/xhci.c
+++ b/drivers/usb/host/xhci.c
@@ -4605,7 +4605,7 @@ static u16 xhci_calculate_u1_timeout(struct xhci_hcd *xhci,
 		}
 	}

-	if (xhci->quirks & XHCI_INTEL_HOST)
+	if (xhci->quirks & (XHCI_INTEL_HOST | XHCI_ZHAOXIN_HOST))
 		timeout_ns = xhci_calculate_intel_u1_timeout(udev, desc);
 	else
 		timeout_ns = udev->u1_params.sel;
@@ -4669,7 +4669,7 @@ static u16 xhci_calculate_u2_timeout(struct xhci_hcd *xhci,
 		}
 	}

-	if (xhci->quirks & XHCI_INTEL_HOST)
+	if (xhci->quirks & (XHCI_INTEL_HOST | XHCI_ZHAOXIN_HOST))
 		timeout_ns = xhci_calculate_intel_u2_timeout(udev, desc);
 	else
 		timeout_ns = udev->u2_params.sel;
@@ -4741,37 +4741,30 @@ static int xhci_update_timeout_for_interface(struct xhci_hcd *xhci,
 	return 0;
 }

-static int xhci_check_intel_tier_policy(struct usb_device *udev,
+static int xhci_check_tier_policy(struct xhci_hcd *xhci,
+		struct usb_device *udev,
 		enum usb3_link_state state)
 {
-	struct usb_device *parent;
-	unsigned int num_hubs;
+	struct usb_device *parent = udev->parent;
+	int tier = 1; /* roothub is tier1 */

-	/* Don't enable U1 if the device is on a 2nd tier hub or lower. */
-	for (parent = udev->parent, num_hubs = 0; parent->parent;
-			parent = parent->parent)
-		num_hubs++;
+	while (parent) {
+		parent = parent->parent;
+		tier++;
+	}

-	if (num_hubs < 2)
-		return 0;
+	if (xhci->quirks & XHCI_INTEL_HOST && tier > 3)
+		goto fail;
+	if (xhci->quirks & XHCI_ZHAOXIN_HOST && tier > 2)
+		goto fail;

-	dev_dbg(&udev->dev, "Disabling U1/U2 link state for device"
-			" below second-tier hub.\n");
-	dev_dbg(&udev->dev, "Plug device into first-tier hub "
-			"to decrease power consumption.\n");
+	return 0;
+fail:
+	dev_dbg(&udev->dev, "Tier policy prevents U1/U2 LPM states for devices at tier %d\n",
+			tier);
 	return -E2BIG;
 }

-static int xhci_check_tier_policy(struct xhci_hcd *xhci,
-		struct usb_device *udev,
-		enum usb3_link_state state)
-{
-	if (xhci->quirks & XHCI_INTEL_HOST)
-		return xhci_check_intel_tier_policy(udev, state);
-	else
-		return 0;
-}
-
 /* Returns the U1 or U2 timeout that should be enabled.
  * If the tier check or timeout setting functions return with a non-zero exit
  * code, that means the timeout value has been finalized and we shouldn't look
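To make the tier rule in the new xhci_check_tier_policy() above concrete, here is a small stand-alone model (plain C, not driver code; struct usb_device here is a minimal stand-in with only a parent pointer, and the per-vendor limits are simply copied from the hunk above: up to tier 3 for Intel hosts, up to tier 2 for ZHAOXIN). It counts tiers the same way, with the root hub as tier 1.

#include <stdio.h>
#include <stddef.h>

/* Minimal stand-in for struct usb_device: only the parent link matters here. */
struct usb_device {
    const char *name;
    struct usb_device *parent;
};

/* Count the tier the same way the patch does: roothub is tier 1,
 * a device plugged straight into it is tier 2, and so on. */
static int usb_device_tier(const struct usb_device *udev)
{
    const struct usb_device *parent = udev->parent;
    int tier = 1;

    while (parent) {
        parent = parent->parent;
        tier++;
    }
    return tier;
}

int main(void)
{
    struct usb_device roothub = { "roothub", NULL };
    struct usb_device hub     = { "hub",     &roothub };
    struct usb_device disk    = { "disk",    &hub };

    int tier = usb_device_tier(&disk);

    /* Assumed limits from the hunk above: Intel allows up to tier 3,
     * the ZHAOXIN policy only up to tier 2. */
    printf("%s is at tier %d: intel U1/U2 %s, zhaoxin U1/U2 %s\n",
           disk.name, tier,
           tier <= 3 ? "allowed" : "denied",
           tier <= 2 ? "allowed" : "denied");
    return 0;
}

For a device behind one external hub (tier 3, as in the example) this prints that the Intel policy would still allow U1/U2 while the ZHAOXIN policy would not, which matches the behaviour the patch describes.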