From patchwork Tue May 18 14:09:55 2021
From: Mika Westerberg <mika.westerberg@linux.intel.com>
To: linux-usb@vger.kernel.org
Cc: Yehezkel Bernat, Michael Jamet, Gil Fine, Casey G Bowman,
    Andreas Noever, Lukas Wunner, Mika Westerberg
Subject: [PATCH 1/8] thunderbolt: Make tb_port_type() take const parameter
Date: Tue, 18 May 2021 17:09:55 +0300
Message-Id: <20210518141002.63616-2-mika.westerberg@linux.intel.com>
In-Reply-To: <20210518141002.63616-1-mika.westerberg@linux.intel.com>
References: <20210518141002.63616-1-mika.westerberg@linux.intel.com>

The function does not modify the object in any way, so make the
parameter const to reflect this. No functional changes intended.
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/switch.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index 4d4bc50a3c44..0edc452c2ac9 100644
--- a/drivers/thunderbolt/switch.c
+++ b/drivers/thunderbolt/switch.c
@@ -459,7 +459,7 @@ static void tb_switch_nvm_remove(struct tb_switch *sw)

 /* port utility functions */

-static const char *tb_port_type(struct tb_regs_port_header *port)
+static const char *tb_port_type(const struct tb_regs_port_header *port)
 {
	switch (port->type >> 16) {
	case 0:
From patchwork Tue May 18 14:09:56 2021
From: Mika Westerberg <mika.westerberg@linux.intel.com>
To: linux-usb@vger.kernel.org
Cc: Yehezkel Bernat, Michael Jamet, Gil Fine, Casey G Bowman,
    Andreas Noever, Lukas Wunner, Mika Westerberg
Subject: [PATCH 2/8] thunderbolt: Move nfc_credits field to struct tb_path_hop
Date: Tue, 18 May 2021 17:09:56 +0300
Message-Id: <20210518141002.63616-3-mika.westerberg@linux.intel.com>
In-Reply-To: <20210518141002.63616-1-mika.westerberg@linux.intel.com>
References: <20210518141002.63616-1-mika.westerberg@linux.intel.com>

With USB4 buffer allocation the number of credits (and non-flow-control
credits) may differ per hop depending on the router buffer allocation
preferences. To allow this, move the nfc_credits field to struct
tb_path_hop.

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/path.c   |  4 ++--
 drivers/thunderbolt/tb.h     |  5 +++--
 drivers/thunderbolt/tunnel.c | 25 ++++++++++++++-----------
 3 files changed, 19 insertions(+), 15 deletions(-)

diff --git a/drivers/thunderbolt/path.c b/drivers/thunderbolt/path.c
index f63e205a35d9..564e2f42cebd 100644
--- a/drivers/thunderbolt/path.c
+++ b/drivers/thunderbolt/path.c
@@ -367,7 +367,7 @@ static void __tb_path_deallocate_nfc(struct tb_path *path, int first_hop)
	int i, res;
	for (i = first_hop; i < path->path_length; i++) {
		res = tb_port_add_nfc_credits(path->hops[i].in_port,
-					      -path->nfc_credits);
+					      -path->hops[i].nfc_credits);
		if (res)
			tb_port_warn(path->hops[i].in_port,
				     "nfc credits deallocation failed for hop %d\n",
@@ -502,7 +502,7 @@ int tb_path_activate(struct tb_path *path)
	/* Add non flow controlled credits. */
	for (i = path->path_length - 1; i >= 0; i--) {
		res = tb_port_add_nfc_credits(path->hops[i].in_port,
-					      path->nfc_credits);
+					      path->hops[i].nfc_credits);
		if (res) {
			__tb_path_deallocate_nfc(path, i);
			goto err;
diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index 60a987c748ca..b4bc25b82fdb 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -256,6 +256,8 @@ struct tb_retimer {
  * @next_hop_index: HopID of the packet when it is routed out from @out_port
  * @initial_credits: Number of initial flow control credits allocated for
  *		     the path
+ * @nfc_credits: Number of non-flow controlled buffers allocated for the
+ *		 @in_port.
  *
  * Hop configuration is always done on the IN port of a switch.
  * in_port and out_port have to be on the same switch. Packets arriving on
@@ -275,6 +277,7 @@ struct tb_path_hop {
	int in_counter_index;
	int next_hop_index;
	unsigned int initial_credits;
+	unsigned int nfc_credits;
 };

 /**
@@ -297,7 +300,6 @@ enum tb_path_port {
  * struct tb_path - a unidirectional path between two ports
  * @tb: Pointer to the domain structure
  * @name: Name of the path (used for debugging)
- * @nfc_credits: Number of non flow controlled credits allocated for the path
  * @ingress_shared_buffer: Shared buffering used for ingress ports on the path
  * @egress_shared_buffer: Shared buffering used for egress ports on the path
  * @ingress_fc_enable: Flow control for ingress ports on the path
@@ -318,7 +320,6 @@ enum tb_path_port {
 struct tb_path {
	struct tb *tb;
	const char *name;
-	int nfc_credits;
	enum tb_path_port ingress_shared_buffer;
	enum tb_path_port egress_shared_buffer;
	enum tb_path_port ingress_fc_enable;
diff --git a/drivers/thunderbolt/tunnel.c b/drivers/thunderbolt/tunnel.c
index e1979bed7146..5be0f31949f1 100644
--- a/drivers/thunderbolt/tunnel.c
+++ b/drivers/thunderbolt/tunnel.c
@@ -119,7 +119,6 @@ static void tb_pci_init_path(struct tb_path *path)
	path->priority = 3;
	path->weight = 1;
	path->drop_packages = 0;
-	path->nfc_credits = 0;
	path->hops[0].initial_credits = 7;
	if (path->path_length > 1)
		path->hops[1].initial_credits =
@@ -616,7 +615,7 @@ static void tb_dp_init_aux_path(struct tb_path *path)

 static void tb_dp_init_video_path(struct tb_path *path, bool discover)
 {
-	u32 nfc_credits = path->hops[0].in_port->config.nfc_credits;
+	int i;

	path->egress_fc_enable = TB_PATH_NONE;
	path->egress_shared_buffer = TB_PATH_NONE;
@@ -625,15 +624,20 @@ static void tb_dp_init_video_path(struct tb_path *path, bool discover)
	path->priority = 1;
	path->weight = 1;

-	if (discover) {
-		path->nfc_credits = nfc_credits & ADP_CS_4_NFC_BUFFERS_MASK;
-	} else {
-		u32 max_credits;
+	for (i = 0; i < path->path_length; i++) {
+		u32 nfc_credits = path->hops[i].in_port->config.nfc_credits;

-		max_credits = (nfc_credits & ADP_CS_4_TOTAL_BUFFERS_MASK) >>
-			ADP_CS_4_TOTAL_BUFFERS_SHIFT;
-		/* Leave some credits for AUX path */
-		path->nfc_credits = min(max_credits - 2, 12U);
+		if (discover) {
+			path->hops[i].nfc_credits =
+				nfc_credits & ADP_CS_4_NFC_BUFFERS_MASK;
+		} else {
+			u32 max_credits;
+
+			max_credits = (nfc_credits & ADP_CS_4_TOTAL_BUFFERS_MASK) >>
+				ADP_CS_4_TOTAL_BUFFERS_SHIFT;
+			/* Leave some credits for AUX path */
+			path->hops[i].nfc_credits = min(max_credits - 2, 12U);
+		}
	}
 }

@@ -1076,7 +1080,6 @@ static void tb_usb3_init_path(struct tb_path *path)
	path->priority = 3;
	path->weight = 3;
	path->drop_packages = 0;
-	path->nfc_credits = 0;
	path->hops[0].initial_credits = 7;
	if (path->path_length > 1)
		path->hops[1].initial_credits =
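Why the field has to live in the hop rather than in the path: with USB4
buffer allocation, routers along the same path may prefer different
numbers of non-flow-control buffers, so one path-wide value can no
longer describe the allocation. A fragment for illustration only (the
credit values are hypothetical; the loop mirrors tb_path_activate()
above):

	/* Each hop now carries its own NFC credit count. */
	path->hops[0].nfc_credits = 12;	/* hypothetical host router preference */
	path->hops[1].nfc_credits = 18;	/* hypothetical device router preference */

	/* Activation programs every hop with its own value. */
	for (i = 0; i < path->path_length; i++)
		res = tb_port_add_nfc_credits(path->hops[i].in_port,
					      path->hops[i].nfc_credits);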
From patchwork Tue May 18 14:09:57 2021
From: Mika Westerberg <mika.westerberg@linux.intel.com>
To: linux-usb@vger.kernel.org
Cc: Yehezkel Bernat, Michael Jamet, Gil Fine, Casey G Bowman,
    Andreas Noever, Lukas Wunner, Mika Westerberg
Subject: [PATCH 3/8] thunderbolt: Wait for the lanes to actually bond
Date: Tue, 18 May 2021 17:09:57 +0300
Message-Id: <20210518141002.63616-4-mika.westerberg@linux.intel.com>
In-Reply-To: <20210518141002.63616-1-mika.westerberg@linux.intel.com>
References: <20210518141002.63616-1-mika.westerberg@linux.intel.com>

It may take some time until the two lanes enter the bonded state, so
poll for the link width to match what is expected before going forward.
This ensures the link is in the expected state before we start
establishing paths through it.

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/switch.c  | 50 +++++++++++++++++++++++++++++++++--
 drivers/thunderbolt/tb.h      |  2 ++
 drivers/thunderbolt/xdomain.c |  8 ++++++
 3 files changed, 58 insertions(+), 2 deletions(-)

diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index 0edc452c2ac9..d5420eefe25d 100644
--- a/drivers/thunderbolt/switch.c
+++ b/drivers/thunderbolt/switch.c
@@ -991,8 +991,11 @@ static int tb_port_set_link_width(struct tb_port *port, unsigned int width)
  * tb_port_lane_bonding_enable() - Enable bonding on port
  * @port: port to enable
  *
- * Enable bonding by setting the link width of the port and the
- * other port in case of dual link port.
+ * Enable bonding by setting the link width of the port and the other
+ * port in case of dual link port. Does not wait for the link to
+ * actually reach the bonded state so caller needs to call
+ * tb_port_wait_for_link_width() before enabling any paths through the
+ * link to make sure the link is in expected state.
  *
  * Return: %0 in case of success and negative errno in case of error
  */
@@ -1043,6 +1046,36 @@ void tb_port_lane_bonding_disable(struct tb_port *port)
	tb_port_set_link_width(port, 1);
 }

+/**
+ * tb_port_wait_for_link_width() - Wait until link reaches specific width
+ * @port: Port to wait for
+ * @width: Expected link width (%1 or %2)
+ * @timeout_msec: Timeout in ms how long to wait
+ *
+ * Should be used after both ends of the link have been bonded (or
+ * bonding has been disabled) to wait until the link actually reaches
+ * the expected state. Returns %-ETIMEDOUT if the @width was not reached
+ * within the given timeout, %0 if it did.
+ */
+int tb_port_wait_for_link_width(struct tb_port *port, int width,
+				int timeout_msec)
+{
+	ktime_t timeout = ktime_add_ms(ktime_get(), timeout_msec);
+	int ret;
+
+	do {
+		ret = tb_port_get_link_width(port);
+		if (ret < 0)
+			return ret;
+		else if (ret == width)
+			return 0;
+
+		usleep_range(1000, 2000);
+	} while (ktime_before(ktime_get(), timeout));
+
+	return -ETIMEDOUT;
+}
+
 static int tb_port_start_lane_initialization(struct tb_port *port)
 {
	int ret;
@@ -2432,6 +2465,12 @@ int tb_switch_lane_bonding_enable(struct tb_switch *sw)
		return ret;
	}

+	ret = tb_port_wait_for_link_width(down, 2, 100);
+	if (ret) {
+		tb_port_warn(down, "timeout enabling lane bonding\n");
+		return ret;
+	}
+
	tb_switch_update_link_attributes(sw);

	tb_sw_dbg(sw, "lane bonding enabled\n");
@@ -2462,6 +2501,13 @@ void tb_switch_lane_bonding_disable(struct tb_switch *sw)
	tb_port_lane_bonding_disable(up);
	tb_port_lane_bonding_disable(down);

+	/*
+	 * It is fine if we get other errors as the router might have
+	 * been unplugged.
+	 */
+	if (tb_port_wait_for_link_width(down, 1, 100) == -ETIMEDOUT)
+		tb_sw_warn(sw, "timeout disabling lane bonding\n");
+
	tb_switch_update_link_attributes(sw);
	tb_sw_dbg(sw, "lane bonding disabled\n");
 }
diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index b4bc25b82fdb..e6c5e8fc7de7 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -883,6 +883,8 @@ int tb_port_get_link_width(struct tb_port *port);
 int tb_port_state(struct tb_port *port);
 int tb_port_lane_bonding_enable(struct tb_port *port);
 void tb_port_lane_bonding_disable(struct tb_port *port);
+int tb_port_wait_for_link_width(struct tb_port *port, int width,
+				int timeout_msec);

 int tb_switch_find_vse_cap(struct tb_switch *sw, enum tb_switch_vse_cap vsec);
 int tb_switch_find_cap(struct tb_switch *sw, enum tb_switch_cap cap);
diff --git a/drivers/thunderbolt/xdomain.c b/drivers/thunderbolt/xdomain.c
index b21d99d59412..39c2da112238 100644
--- a/drivers/thunderbolt/xdomain.c
+++ b/drivers/thunderbolt/xdomain.c
@@ -1527,6 +1527,12 @@ int tb_xdomain_lane_bonding_enable(struct tb_xdomain *xd)
		return ret;
	}

+	ret = tb_port_wait_for_link_width(port, 2, 100);
+	if (ret) {
+		tb_port_warn(port, "timeout enabling lane bonding\n");
+		return ret;
+	}
+
	tb_xdomain_update_link_attributes(xd);

	dev_dbg(&xd->dev, "lane bonding enabled\n");
@@ -1548,6 +1554,8 @@ void tb_xdomain_lane_bonding_disable(struct tb_xdomain *xd)
	port = tb_port_at(xd->route, tb_xdomain_parent(xd));
	if (port->dual_link_port) {
		tb_port_lane_bonding_disable(port);
+		if (tb_port_wait_for_link_width(port, 1, 100) == -ETIMEDOUT)
+			tb_port_warn(port, "timeout disabling lane bonding\n");
		tb_port_disable(port->dual_link_port);

		tb_xdomain_update_link_attributes(xd);
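For reference, the calling sequence this enables on the connection
manager side, condensed from the tb_switch_lane_bonding_enable() hunk
above (a sketch only; the 100 ms timeout is the value the patch uses):

	ret = tb_port_lane_bonding_enable(down);
	if (ret)
		return ret;

	/* The width change is asynchronous, so wait for it to complete... */
	ret = tb_port_wait_for_link_width(down, 2, 100);
	if (ret) {
		tb_port_warn(down, "timeout enabling lane bonding\n");
		return ret;
	}

	/* ...and only then start establishing paths through the link. */
	tb_switch_update_link_attributes(sw);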
From patchwork Tue May 18 14:09:58 2021
From: Mika Westerberg <mika.westerberg@linux.intel.com>
To: linux-usb@vger.kernel.org
Cc: Yehezkel Bernat, Michael Jamet, Gil Fine, Casey G Bowman,
    Andreas Noever, Lukas Wunner, Mika Westerberg
Subject: [PATCH 4/8] thunderbolt: Read router preferred credit allocation information
Date: Tue, 18 May 2021 17:09:58 +0300
Message-Id: <20210518141002.63616-5-mika.westerberg@linux.intel.com>
In-Reply-To: <20210518141002.63616-1-mika.westerberg@linux.intel.com>
References: <20210518141002.63616-1-mika.westerberg@linux.intel.com>

USB4 routers must expose their preferred credit (buffer) allocation
information through a router operation. This information tells the
connection manager how the router prefers its buffers to be allocated
to get the expected bandwidth for the supported protocols. Read this
information and store it as part of struct tb_switch for each USB4
router.

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/switch.c  |  51 +++++++++--
 drivers/thunderbolt/tb.h      |  22 +++++
 drivers/thunderbolt/tb_regs.h |   1 +
 drivers/thunderbolt/usb4.c    | 155 ++++++++++++++++++++++++++++++++++
 4 files changed, 221 insertions(+), 8 deletions(-)

diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index d5420eefe25d..ac6cb304c49f 100644
--- a/drivers/thunderbolt/switch.c
+++ b/drivers/thunderbolt/switch.c
@@ -488,17 +488,21 @@ static const char *tb_port_type(const struct tb_regs_port_header *port)
	}
 }

-static void tb_dump_port(struct tb *tb, struct tb_regs_port_header *port)
+static void tb_dump_port(struct tb *tb, const struct tb_port *port)
 {
+	const struct tb_regs_port_header *regs = &port->config;
+
	tb_dbg(tb,
	       " Port %d: %x:%x (Revision: %d, TB Version: %d, Type: %s (%#x))\n",
-	       port->port_number, port->vendor_id, port->device_id,
-	       port->revision, port->thunderbolt_version, tb_port_type(port),
-	       port->type);
+	       regs->port_number, regs->vendor_id, regs->device_id,
+	       regs->revision, regs->thunderbolt_version, tb_port_type(regs),
+	       regs->type);
	tb_dbg(tb, "  Max hop id (in/out): %d/%d\n",
-	       port->max_in_hop_id, port->max_out_hop_id);
-	tb_dbg(tb, "  Max counters: %d\n", port->max_counters);
-	tb_dbg(tb, "  NFC Credits: %#x\n", port->nfc_credits);
+	       regs->max_in_hop_id, regs->max_out_hop_id);
+	tb_dbg(tb, "  Max counters: %d\n", regs->max_counters);
+	tb_dbg(tb, "  NFC Credits: %#x\n", regs->nfc_credits);
+	tb_dbg(tb, "  Credits (total/control): %u/%u\n", port->total_credits,
+	       port->ctl_credits);
 }

 /**
@@ -738,13 +742,32 @@ static int tb_init_port(struct tb_port *port)
		cap = tb_port_find_cap(port, TB_PORT_CAP_USB4);
		if (cap > 0)
			port->cap_usb4 = cap;
+
+		/*
+		 * For USB4 ports the buffers allocated for the control
+		 * path can be read from the path config space. For
+		 * legacy devices we use a hard-coded value.
+		 */
+		if (tb_switch_is_usb4(port->sw)) {
+			struct tb_regs_hop hop;
+
+			if (!tb_port_read(port, &hop, TB_CFG_HOPS, 0, 2))
+				port->ctl_credits = hop.initial_credits;
+		}
+		if (!port->ctl_credits)
+			port->ctl_credits = 2;
+
	} else if (port->port != 0) {
		cap = tb_port_find_cap(port, TB_PORT_CAP_ADAP);
		if (cap > 0)
			port->cap_adap = cap;
	}

-	tb_dump_port(port->sw->tb, &port->config);
+	port->total_credits =
+		(port->config.nfc_credits & ADP_CS_4_TOTAL_BUFFERS_MASK) >>
+		ADP_CS_4_TOTAL_BUFFERS_SHIFT;
+
+	tb_dump_port(port->sw->tb, port);

	INIT_LIST_HEAD(&port->list);
	return 0;
@@ -2575,6 +2598,16 @@ void tb_switch_unconfigure_link(struct tb_switch *sw)
		tb_lc_unconfigure_port(down);
 }

+static void tb_switch_credits_init(struct tb_switch *sw)
+{
+	if (tb_switch_is_icm(sw))
+		return;
+	if (!tb_switch_is_usb4(sw))
+		return;
+	if (usb4_switch_credits_init(sw))
+		tb_sw_info(sw, "failed to determine preferred buffer allocation, using defaults\n");
+}
+
 /**
  * tb_switch_add() - Add a switch to the domain
  * @sw: Switch to add
@@ -2605,6 +2638,8 @@ int tb_switch_add(struct tb_switch *sw)
	}

	if (!sw->safe_mode) {
+		tb_switch_credits_init(sw);
+
		/* read drom */
		ret = tb_drom_read(sw);
		if (ret) {
diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index e6c5e8fc7de7..a8190009815c 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -136,6 +136,12 @@ struct tb_switch_tmu {
  * @rpm_complete: Completion used to wait for runtime resume to
  *		  complete (ICM only)
  * @quirks: Quirks used for this Thunderbolt switch
+ * @credit_allocation: Are the below buffer allocation parameters valid
+ * @max_usb3_credits: Router preferred number of buffers for USB 3.x
+ * @min_dp_aux_credits: Router preferred minimum number of buffers for DP AUX
+ * @min_dp_main_credits: Router preferred minimum number of buffers for DP MAIN
+ * @max_pcie_credits: Router preferred number of buffers for PCIe
+ * @max_dma_credits: Router preferred number of buffers for DMA/P2P
  *
  * When the switch is being added or removed to the domain (other
  * switches) you need to have domain lock held.
@@ -178,6 +184,12 @@ struct tb_switch {
	u8 depth;
	struct completion rpm_complete;
	unsigned long quirks;
+	bool credit_allocation;
+	unsigned int max_usb3_credits;
+	unsigned int min_dp_aux_credits;
+	unsigned int min_dp_main_credits;
+	unsigned int max_pcie_credits;
+	unsigned int max_dma_credits;
 };

 /**
@@ -199,6 +211,8 @@ struct tb_switch {
  * @in_hopids: Currently allocated input HopIDs
  * @out_hopids: Currently allocated output HopIDs
  * @list: Used to link ports to DP resources list
+ * @total_credits: Total number of buffers available for this port
+ * @ctl_credits: Buffers reserved for control path
  *
  * In USB4 terminology this structure represents an adapter (protocol or
  * lane adapter).
@@ -220,6 +234,8 @@ struct tb_port {
	struct ida in_hopids;
	struct ida out_hopids;
	struct list_head list;
+	unsigned int total_credits;
+	unsigned int ctl_credits;
 };

 /**
@@ -866,6 +882,11 @@ void tb_port_release_out_hopid(struct tb_port *port, int hopid);
 struct tb_port *tb_next_port_on_path(struct tb_port *start,
				     struct tb_port *end,
				     struct tb_port *prev);

+static inline bool tb_port_use_credit_allocation(const struct tb_port *port)
+{
+	return tb_port_is_null(port) && port->sw->credit_allocation;
+}
+
 /**
  * tb_for_each_port_on_path() - Iterate over each port on path
  * @src: Source port
@@ -994,6 +1015,7 @@ int usb4_switch_nvm_write(struct tb_switch *sw, unsigned int address,
			  const void *buf, size_t size);
 int usb4_switch_nvm_authenticate(struct tb_switch *sw);
 int usb4_switch_nvm_authenticate_status(struct tb_switch *sw, u32 *status);
+int usb4_switch_credits_init(struct tb_switch *sw);
 bool usb4_switch_query_dp_resource(struct tb_switch *sw, struct tb_port *in);
 int usb4_switch_alloc_dp_resource(struct tb_switch *sw, struct tb_port *in);
 int usb4_switch_dealloc_dp_resource(struct tb_switch *sw, struct tb_port *in);
diff --git a/drivers/thunderbolt/tb_regs.h b/drivers/thunderbolt/tb_regs.h
index 113d7903b183..484f25be2849 100644
--- a/drivers/thunderbolt/tb_regs.h
+++ b/drivers/thunderbolt/tb_regs.h
@@ -229,6 +229,7 @@ enum usb4_switch_op {
	USB4_SWITCH_OP_NVM_SET_OFFSET = 0x23,
	USB4_SWITCH_OP_DROM_READ = 0x24,
	USB4_SWITCH_OP_NVM_SECTOR_SIZE = 0x25,
+	USB4_SWITCH_OP_BUFFER_ALLOC = 0x33,
 };

 /* Router TMU configuration */
diff --git a/drivers/thunderbolt/usb4.c b/drivers/thunderbolt/usb4.c
index b56af7b0a093..edab8ea63c0b 100644
--- a/drivers/thunderbolt/usb4.c
+++ b/drivers/thunderbolt/usb4.c
@@ -36,6 +36,20 @@ enum usb4_sb_target {

 #define USB4_NVM_SECTOR_SIZE_MASK	GENMASK(23, 0)

+#define USB4_BA_LENGTH_MASK		GENMASK(7, 0)
+#define USB4_BA_INDEX_MASK		GENMASK(15, 0)
+
+enum usb4_ba_index {
+	USB4_BA_MAX_USB3 = 0x1,
+	USB4_BA_MIN_DP_AUX = 0x2,
+	USB4_BA_MIN_DP_MAIN = 0x3,
+	USB4_BA_MAX_PCIE = 0x4,
+	USB4_BA_MAX_HI = 0x5,
+};
+
+#define USB4_BA_VALUE_MASK		GENMASK(31, 16)
+#define USB4_BA_VALUE_SHIFT		16
+
 static int usb4_switch_wait_for_bit(struct tb_switch *sw, u32 offset, u32 bit,
				    u32 value, int timeout_msec)
 {
@@ -669,6 +683,147 @@ int usb4_switch_nvm_authenticate_status(struct tb_switch *sw, u32 *status)
	return 0;
 }

+/**
+ * usb4_switch_credits_init() - Read buffer allocation parameters
+ * @sw: USB4 router
+ *
+ * Reads @sw buffer allocation parameters and initializes @sw buffer
+ * allocation fields accordingly. Specifically @sw->credit_allocation
+ * is set to %true if these parameters can be used in tunneling.
+ *
+ * Returns %0 on success and negative errno otherwise.
+ */
+int usb4_switch_credits_init(struct tb_switch *sw)
+{
+	int max_usb3, min_dp_aux, min_dp_main, max_pcie, max_dma;
+	int ret, length, i, nports;
+	const struct tb_port *port;
+	u32 data[NVM_DATA_DWORDS];
+	u32 metadata = 0;
+	u8 status = 0;
+
+	memset(data, 0, sizeof(data));
+	ret = usb4_switch_op_data(sw, USB4_SWITCH_OP_BUFFER_ALLOC, &metadata,
+				  &status, NULL, 0, data, ARRAY_SIZE(data));
+	if (ret)
+		return ret;
+	if (status)
+		return -EIO;
+
+	length = metadata & USB4_BA_LENGTH_MASK;
+	if (WARN_ON(length > ARRAY_SIZE(data)))
+		return -EMSGSIZE;
+
+	max_usb3 = -1;
+	min_dp_aux = -1;
+	min_dp_main = -1;
+	max_pcie = -1;
+	max_dma = -1;
+
+	tb_sw_dbg(sw, "credit allocation parameters:\n");
+
+	for (i = 0; i < length; i++) {
+		u16 index, value;
+
+		index = data[i] & USB4_BA_INDEX_MASK;
+		value = (data[i] & USB4_BA_VALUE_MASK) >> USB4_BA_VALUE_SHIFT;
+
+		switch (index) {
+		case USB4_BA_MAX_USB3:
+			tb_sw_dbg(sw, " USB3: %u\n", value);
+			max_usb3 = value;
+			break;
+		case USB4_BA_MIN_DP_AUX:
+			tb_sw_dbg(sw, " DP AUX: %u\n", value);
+			min_dp_aux = value;
+			break;
+		case USB4_BA_MIN_DP_MAIN:
+			tb_sw_dbg(sw, " DP main: %u\n", value);
+			min_dp_main = value;
+			break;
+		case USB4_BA_MAX_PCIE:
+			tb_sw_dbg(sw, " PCIe: %u\n", value);
+			max_pcie = value;
+			break;
+		case USB4_BA_MAX_HI:
+			tb_sw_dbg(sw, " DMA: %u\n", value);
+			max_dma = value;
+			break;
+		default:
+			tb_sw_dbg(sw, " unknown credit allocation index %#x, skipping\n",
+				  index);
+			break;
+		}
+	}
+
+	/*
+	 * Validate the buffer allocation preferences. If we find
+	 * issues, log a warning and fall back using the hard-coded
+	 * values.
+	 */
+
+	/* Host router must report baMaxHI */
+	if (!tb_route(sw) && max_dma < 0) {
+		tb_sw_warn(sw, "host router is missing baMaxHI\n");
+		goto err_invalid;
+	}
+
+	nports = 0;
+	tb_switch_for_each_port(sw, port) {
+		if (tb_port_is_null(port))
+			nports++;
+	}
+
+	/* Must have DP buffer allocation (multiple USB4 ports) */
+	if (nports > 2 && (min_dp_aux < 0 || min_dp_main < 0)) {
+		tb_sw_warn(sw, "multiple USB4 ports require baMinDPaux/baMinDPmain\n");
+		goto err_invalid;
+	}
+
+	tb_switch_for_each_port(sw, port) {
+		if (tb_port_is_dpout(port) && min_dp_main < 0) {
+			tb_sw_warn(sw, "missing baMinDPmain");
+			goto err_invalid;
+		}
+		if ((tb_port_is_dpin(port) || tb_port_is_dpout(port)) &&
+		    min_dp_aux < 0) {
+			tb_sw_warn(sw, "missing baMinDPaux");
+			goto err_invalid;
+		}
+		if ((tb_port_is_usb3_down(port) || tb_port_is_usb3_up(port)) &&
+		    max_usb3 < 0) {
+			tb_sw_warn(sw, "missing baMaxUSB3");
+			goto err_invalid;
+		}
+		if ((tb_port_is_pcie_down(port) || tb_port_is_pcie_up(port)) &&
+		    max_pcie < 0) {
+			tb_sw_warn(sw, "missing baMaxPCIe");
+			goto err_invalid;
+		}
+	}
+
+	/*
+	 * Buffer allocation passed the validation so we can use it in
+	 * path creation.
+	 */
+	sw->credit_allocation = true;
+	if (max_usb3 > 0)
+		sw->max_usb3_credits = max_usb3;
+	if (min_dp_aux > 0)
+		sw->min_dp_aux_credits = min_dp_aux;
+	if (min_dp_main > 0)
+		sw->min_dp_main_credits = min_dp_main;
+	if (max_pcie > 0)
+		sw->max_pcie_credits = max_pcie;
+	if (max_dma > 0)
+		sw->max_dma_credits = max_dma;
+
+	return 0;
+
+err_invalid:
+	return -EINVAL;
+}
+
 /**
  * usb4_switch_query_dp_resource() - Query availability of DP IN resource
  * @sw: USB4 router
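Each returned dword packs one parameter as defined by the masks above:
bits 15:0 carry the parameter index and bits 31:16 the value. A worked
decode of one hypothetical dword (not taken from real hardware):

	u32 dword = 0x002a0001;		/* hypothetical BUFFER_ALLOC entry */
	u16 index = dword & USB4_BA_INDEX_MASK;				 /* 0x1 */
	u16 value = (dword & USB4_BA_VALUE_MASK) >> USB4_BA_VALUE_SHIFT; /* 42 */

	/*
	 * Index 0x1 is USB4_BA_MAX_USB3, so this router prefers 42
	 * (0x2a) buffers for USB 3.x tunneling.
	 */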
From patchwork Tue May 18 14:09:59 2021
From: Mika Westerberg <mika.westerberg@linux.intel.com>
To: linux-usb@vger.kernel.org
Cc: Yehezkel Bernat, Michael Jamet, Gil Fine, Casey G Bowman,
    Andreas Noever, Lukas Wunner, Mika Westerberg
Subject: [PATCH 5/8] thunderbolt: Update port credits after bonding is enabled/disabled
Date: Tue, 18 May 2021 17:09:59 +0300
Message-Id: <20210518141002.63616-6-mika.westerberg@linux.intel.com>
In-Reply-To: <20210518141002.63616-1-mika.westerberg@linux.intel.com>
References: <20210518141002.63616-1-mika.westerberg@linux.intel.com>

Once lane bonding has been enabled (or disabled), both lane adapters
may update their total credits accordingly. For this reason, re-read
the port credits after lane bonding has been enabled or disabled.

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/switch.c  | 48 +++++++++++++++++++++++++++++++++++
 drivers/thunderbolt/tb.h      |  1 +
 drivers/thunderbolt/xdomain.c |  2 ++
 3 files changed, 51 insertions(+)

diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index ac6cb304c49f..e015dc93a916 100644
--- a/drivers/thunderbolt/switch.c
+++ b/drivers/thunderbolt/switch.c
@@ -1099,6 +1099,49 @@ int tb_port_wait_for_link_width(struct tb_port *port, int width,
	return -ETIMEDOUT;
 }

+static int tb_port_do_update_credits(struct tb_port *port)
+{
+	u32 nfc_credits;
+	int ret;
+
+	ret = tb_port_read(port, &nfc_credits, TB_CFG_PORT, ADP_CS_4, 1);
+	if (ret)
+		return ret;
+
+	if (nfc_credits != port->config.nfc_credits) {
+		u32 total;
+
+		total = (nfc_credits & ADP_CS_4_TOTAL_BUFFERS_MASK) >>
+			ADP_CS_4_TOTAL_BUFFERS_SHIFT;
+
+		tb_port_dbg(port, "total credits changed %u -> %u\n",
+			    port->total_credits, total);
+
+		port->config.nfc_credits = nfc_credits;
+		port->total_credits = total;
+	}
+
+	return 0;
+}
+
+/**
+ * tb_port_update_credits() - Re-read port total credits
+ * @port: Port to update
+ *
+ * After the link is bonded (or bonding was disabled) the port total
+ * credits may change, so this function needs to be called to re-read
+ * the credits. Updates also the second lane adapter.
+ */
+int tb_port_update_credits(struct tb_port *port)
+{
+	int ret;
+
+	ret = tb_port_do_update_credits(port);
+	if (ret)
+		return ret;
+	return tb_port_do_update_credits(port->dual_link_port);
+}
+
 static int tb_port_start_lane_initialization(struct tb_port *port)
 {
	int ret;
@@ -2494,6 +2537,8 @@ int tb_switch_lane_bonding_enable(struct tb_switch *sw)
		return ret;
	}

+	tb_port_update_credits(down);
+	tb_port_update_credits(up);
	tb_switch_update_link_attributes(sw);

	tb_sw_dbg(sw, "lane bonding enabled\n");
@@ -2531,7 +2576,10 @@ void tb_switch_lane_bonding_disable(struct tb_switch *sw)
	if (tb_port_wait_for_link_width(down, 1, 100) == -ETIMEDOUT)
		tb_sw_warn(sw, "timeout disabling lane bonding\n");

+	tb_port_update_credits(down);
+	tb_port_update_credits(up);
	tb_switch_update_link_attributes(sw);
+
	tb_sw_dbg(sw, "lane bonding disabled\n");
 }

diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index a8190009815c..e2f304d4a65d 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -906,6 +906,7 @@ int tb_port_lane_bonding_enable(struct tb_port *port);
 void tb_port_lane_bonding_disable(struct tb_port *port);
 int tb_port_wait_for_link_width(struct tb_port *port, int width,
				 int timeout_msec);
+int tb_port_update_credits(struct tb_port *port);

 int tb_switch_find_vse_cap(struct tb_switch *sw, enum tb_switch_vse_cap vsec);
 int tb_switch_find_cap(struct tb_switch *sw, enum tb_switch_cap cap);
diff --git a/drivers/thunderbolt/xdomain.c b/drivers/thunderbolt/xdomain.c
index 39c2da112238..d66ea4d616fd 100644
--- a/drivers/thunderbolt/xdomain.c
+++ b/drivers/thunderbolt/xdomain.c
@@ -1533,6 +1533,7 @@ int tb_xdomain_lane_bonding_enable(struct tb_xdomain *xd)
		return ret;
	}

+	tb_port_update_credits(port);
	tb_xdomain_update_link_attributes(xd);

	dev_dbg(&xd->dev, "lane bonding enabled\n");
@@ -1557,6 +1558,7 @@ void tb_xdomain_lane_bonding_disable(struct tb_xdomain *xd)
	if (tb_port_wait_for_link_width(port, 1, 100) == -ETIMEDOUT)
		tb_port_warn(port, "timeout disabling lane bonding\n");
	tb_port_disable(port->dual_link_port);
+	tb_port_update_credits(port);
	tb_xdomain_update_link_attributes(xd);

	dev_dbg(&xd->dev, "lane bonding disabled\n");
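The background here is that the ADP_CS_4 total-buffers field of a lane
adapter changes once the lanes bond; as a hypothetical illustration, an
adapter reporting 60 buffers unbonded might report 120 after bonding,
so credits reserved from stale values would mis-size any path created
afterwards. The call sites therefore pair the wait from the previous
patch with the re-read (a sketch condensed from the hunks above):

	if (tb_port_wait_for_link_width(down, 2, 100) == -ETIMEDOUT)
		tb_sw_warn(sw, "timeout enabling lane bonding\n");

	tb_port_update_credits(down);	/* also re-reads the dual link port */
	tb_port_update_credits(up);
	tb_switch_update_link_attributes(sw);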
From patchwork Tue May 18 14:10:00 2021
From: Mika Westerberg <mika.westerberg@linux.intel.com>
To: linux-usb@vger.kernel.org
Cc: Yehezkel Bernat, Michael Jamet, Gil Fine, Casey G Bowman,
    Andreas Noever, Lukas Wunner, Mika Westerberg
Subject: [PATCH 6/8] thunderbolt: Allocate credits according to router preferences
Date: Tue, 18 May 2021 17:10:00 +0300
Message-Id: <20210518141002.63616-7-mika.westerberg@linux.intel.com>
In-Reply-To: <20210518141002.63616-1-mika.westerberg@linux.intel.com>
References: <20210518141002.63616-1-mika.westerberg@linux.intel.com>

The USB4 Connection Manager guide provides detailed information on how
the USB4 router buffer (credit) allocation information should be used
by the connection manager when it allocates buffers for different
paths. This patch implements it for Linux.

For USB 3.x and DisplayPort we use the router preferences directly. The
rest of the buffer space is then used for PCIe and DMA (peer-to-peer,
XDomain) traffic. DMA tunnels require at least one buffer and PCIe six,
so if there are not enough buffers we fail the tunnel creation. For
legacy Thunderbolt 1-3 devices we use the existing hard-coded scheme,
except for DMA where we use the values suggested by the USB4 spec
chapter 13.
Co-developed-by: Gil Fine
Signed-off-by: Gil Fine
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/tb.h     |  14 ++
 drivers/thunderbolt/tunnel.c | 404 ++++++++++++++++++++++++++++-------
 drivers/thunderbolt/tunnel.h |   2 +
 3 files changed, 346 insertions(+), 74 deletions(-)

diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index e2f304d4a65d..89e38aeea52b 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -213,6 +213,8 @@ struct tb_switch {
  * @list: Used to link ports to DP resources list
  * @total_credits: Total number of buffers available for this port
  * @ctl_credits: Buffers reserved for control path
+ * @dma_credits: Number of credits allocated for DMA tunneling for all
+ *		 DMA paths through this port.
  *
  * In USB4 terminology this structure represents an adapter (protocol or
  * lane adapter).
@@ -236,6 +238,7 @@ struct tb_port {
	struct list_head list;
	unsigned int total_credits;
	unsigned int ctl_credits;
+	unsigned int dma_credits;
 };

 /**
@@ -941,6 +944,17 @@ bool tb_path_is_invalid(struct tb_path *path);
 bool tb_path_port_on_path(const struct tb_path *path,
			   const struct tb_port *port);

+/**
+ * tb_path_for_each_hop() - Iterate over each hop on path
+ * @path: Path whose hops to iterate
+ * @hop: Hop used as iterator
+ *
+ * Iterates over each hop on path.
+ */
+#define tb_path_for_each_hop(path, hop)					\
+	for ((hop) = &(path)->hops[0];					\
+	     (hop) <= &(path)->hops[(path)->path_length - 1]; (hop)++)
+
 int tb_drom_read(struct tb_switch *sw);
 int tb_drom_read_uid_only(struct tb_switch *sw, u64 *uid);

diff --git a/drivers/thunderbolt/tunnel.c b/drivers/thunderbolt/tunnel.c
index 5be0f31949f1..bb5cc480fc9a 100644
--- a/drivers/thunderbolt/tunnel.c
+++ b/drivers/thunderbolt/tunnel.c
@@ -34,6 +34,16 @@
 #define TB_DP_AUX_PATH_OUT		1
 #define TB_DP_AUX_PATH_IN		2

+/* Minimum number of credits needed for PCIe path */
+#define TB_MIN_PCIE_CREDITS		6U
+/*
+ * Number of credits we try to allocate for each DMA path if not limited
+ * by the host router baMaxHI.
+ */
+#define TB_DMA_CREDITS			14U
+/* Minimum number of credits for DMA path */
+#define TB_MIN_DMA_CREDITS		1U
+
 static const char * const tb_tunnel_names[] = { "PCI", "DP", "DMA", "USB3" };

 #define __TB_TUNNEL_PRINT(level, tunnel, fmt, arg...)                   \
@@ -57,6 +67,55 @@ static const char * const tb_tunnel_names[] = { "PCI", "DP", "DMA", "USB3" };
 #define tb_tunnel_dbg(tunnel, fmt, arg...) \
	__TB_TUNNEL_PRINT(tb_dbg, tunnel, fmt, ##arg)

+static inline unsigned int tb_usable_credits(const struct tb_port *port)
+{
+	return port->total_credits - port->ctl_credits;
+}
+
+/**
+ * tb_available_credits() - Available credits for PCIe and DMA
+ * @port: Lane adapter to check
+ * @max_dp_streams: If non-%NULL stores maximum number of simultaneous DP
+ *		    streams possible through this lane adapter
+ */
+static unsigned int tb_available_credits(const struct tb_port *port,
+					 size_t *max_dp_streams)
+{
+	const struct tb_switch *sw = port->sw;
+	int credits, usb3, pcie, spare;
+	size_t ndp;
+
+	usb3 = tb_acpi_may_tunnel_usb3() ? sw->max_usb3_credits : 0;
+	pcie = tb_acpi_may_tunnel_pcie() ? sw->max_pcie_credits : 0;
+
+	if (tb_acpi_is_xdomain_allowed()) {
+		spare = min_not_zero(sw->max_dma_credits, TB_DMA_CREDITS);
+		/* Add some credits for potential second DMA tunnel */
+		spare += TB_MIN_DMA_CREDITS;
+	} else {
+		spare = 0;
+	}
+
+	credits = tb_usable_credits(port);
+	if (tb_acpi_may_tunnel_dp()) {
+		/*
+		 * Maximum number of DP streams possible through the
+		 * lane adapter.
+		 */
+		ndp = (credits - (usb3 + pcie + spare)) /
+		      (sw->min_dp_aux_credits + sw->min_dp_main_credits);
+	} else {
+		ndp = 0;
+	}
+	credits -= ndp * (sw->min_dp_aux_credits + sw->min_dp_main_credits);
+	credits -= usb3;
+
+	if (max_dp_streams)
+		*max_dp_streams = ndp;
+
+	return credits > 0 ? credits : 0;
+}
+
 static struct tb_tunnel *tb_tunnel_alloc(struct tb *tb, size_t npaths,
					 enum tb_tunnel_type type)
 {
@@ -94,24 +153,37 @@ static int tb_pci_activate(struct tb_tunnel *tunnel, bool activate)
	return 0;
 }

-static int tb_initial_credits(const struct tb_switch *sw)
+static int tb_pci_init_credits(struct tb_path_hop *hop)
 {
-	/* If the path is complete sw is not NULL */
-	if (sw) {
-		/* More credits for faster link */
-		switch (sw->link_speed * sw->link_width) {
-		case 40:
-			return 32;
-		case 20:
-			return 24;
-		}
+	struct tb_port *port = hop->in_port;
+	struct tb_switch *sw = port->sw;
+	unsigned int credits;
+
+	if (tb_port_use_credit_allocation(port)) {
+		unsigned int available;
+
+		available = tb_available_credits(port, NULL);
+		credits = min(sw->max_pcie_credits, available);
+
+		if (credits < TB_MIN_PCIE_CREDITS)
+			return -ENOSPC;
+
+		credits = max(TB_MIN_PCIE_CREDITS, credits);
+	} else {
+		if (tb_port_is_null(port))
+			credits = port->bonded ? 32 : 16;
+		else
+			credits = 7;
	}

-	return 16;
+	hop->initial_credits = credits;
+	return 0;
 }

-static void tb_pci_init_path(struct tb_path *path)
+static int tb_pci_init_path(struct tb_path *path)
 {
+	struct tb_path_hop *hop;
+
	path->egress_fc_enable = TB_PATH_SOURCE | TB_PATH_INTERNAL;
	path->egress_shared_buffer = TB_PATH_NONE;
	path->ingress_fc_enable = TB_PATH_ALL;
@@ -119,10 +191,16 @@ static void tb_pci_init_path(struct tb_path *path)
	path->priority = 3;
	path->weight = 1;
	path->drop_packages = 0;
-	path->hops[0].initial_credits = 7;
-	if (path->path_length > 1)
-		path->hops[1].initial_credits =
-			tb_initial_credits(path->hops[1].in_port->sw);
+
+	tb_path_for_each_hop(path, hop) {
+		int ret;
+
+		ret = tb_pci_init_credits(hop);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
 }

 /**
@@ -162,14 +240,16 @@ struct tb_tunnel *tb_tunnel_discover_pci(struct tb *tb, struct tb_port *down)
		goto err_free;
	}
	tunnel->paths[TB_PCI_PATH_UP] = path;
-	tb_pci_init_path(tunnel->paths[TB_PCI_PATH_UP]);
+	if (tb_pci_init_path(tunnel->paths[TB_PCI_PATH_UP]))
+		goto err_free;

	path = tb_path_discover(tunnel->dst_port, -1, down, TB_PCI_HOPID, NULL,
				"PCIe Down");
	if (!path)
		goto err_deactivate;
	tunnel->paths[TB_PCI_PATH_DOWN] = path;
-	tb_pci_init_path(tunnel->paths[TB_PCI_PATH_DOWN]);
+	if (tb_pci_init_path(tunnel->paths[TB_PCI_PATH_DOWN]))
+		goto err_deactivate;

	/* Validate that the tunnel is complete */
	if (!tb_port_is_pcie_up(tunnel->dst_port)) {
@@ -227,23 +307,25 @@ struct tb_tunnel *tb_tunnel_alloc_pci(struct tb *tb, struct tb_port *up,

	path = tb_path_alloc(tb, down, TB_PCI_HOPID, up, TB_PCI_HOPID, 0,
			     "PCIe Down");
-	if (!path) {
-		tb_tunnel_free(tunnel);
-		return NULL;
-	}
-	tb_pci_init_path(path);
+	if (!path)
+		goto err_free;
	tunnel->paths[TB_PCI_PATH_DOWN] = path;
+	if (tb_pci_init_path(path))
+		goto err_free;

	path = tb_path_alloc(tb, up, TB_PCI_HOPID, down, TB_PCI_HOPID, 0,
			     "PCIe Up");
-	if (!path) {
-		tb_tunnel_free(tunnel);
-		return NULL;
-	}
-	tb_pci_init_path(path);
+	if (!path)
+		goto err_free;
	tunnel->paths[TB_PCI_PATH_UP] = path;
+	if (tb_pci_init_path(path))
+		goto err_free;

	return tunnel;
+
+err_free:
+	tb_tunnel_free(tunnel);
+	return NULL;
 }

 static bool tb_dp_is_usb4(const struct tb_switch *sw)
@@ -598,9 +680,20 @@ static int tb_dp_consumed_bandwidth(struct tb_tunnel *tunnel, int *consumed_up,
	return 0;
 }

+static void tb_dp_init_aux_credits(struct tb_path_hop *hop)
+{
+	struct tb_port *port = hop->in_port;
+	struct tb_switch *sw = port->sw;
+
+	if (tb_port_use_credit_allocation(port))
+		hop->initial_credits = sw->min_dp_aux_credits;
+	else
+		hop->initial_credits = 1;
+}
+
 static void tb_dp_init_aux_path(struct tb_path *path)
 {
-	int i;
+	struct tb_path_hop *hop;

	path->egress_fc_enable = TB_PATH_SOURCE | TB_PATH_INTERNAL;
	path->egress_shared_buffer = TB_PATH_NONE;
@@ -609,13 +702,42 @@ static void tb_dp_init_aux_path(struct tb_path *path)
	path->priority = 2;
	path->weight = 1;

-	for (i = 0; i < path->path_length; i++)
-		path->hops[i].initial_credits = 1;
+	tb_path_for_each_hop(path, hop)
+		tb_dp_init_aux_credits(hop);
 }

-static void tb_dp_init_video_path(struct tb_path *path, bool discover)
+static int tb_dp_init_video_credits(struct tb_path_hop *hop)
 {
-	int i;
+	struct tb_port *port = hop->in_port;
+	struct tb_switch *sw = port->sw;
+
+	if (tb_port_use_credit_allocation(port)) {
+		unsigned int nfc_credits;
+		size_t max_dp_streams;
+
+		tb_available_credits(port, &max_dp_streams);
+		/*
+		 * Read the number of currently allocated NFC credits
+		 * from the lane adapter. Since we only use them for DP
+		 * tunneling we can use that to figure out how many DP
+		 * tunnels already go through the lane adapter.
+		 */
+		nfc_credits = port->config.nfc_credits &
+				ADP_CS_4_NFC_BUFFERS_MASK;
+		if (nfc_credits / sw->min_dp_main_credits > max_dp_streams)
+			return -ENOSPC;
+
+		hop->nfc_credits = sw->min_dp_main_credits;
+	} else {
+		hop->nfc_credits = min(port->total_credits - 2, 12U);
+	}
+
+	return 0;
+}
+
+static int tb_dp_init_video_path(struct tb_path *path)
+{
+	struct tb_path_hop *hop;

	path->egress_fc_enable = TB_PATH_NONE;
	path->egress_shared_buffer = TB_PATH_NONE;
@@ -624,21 +746,15 @@ static void tb_dp_init_video_path(struct tb_path *path, bool discover)
	path->priority = 1;
	path->weight = 1;

-	for (i = 0; i < path->path_length; i++) {
-		u32 nfc_credits = path->hops[i].in_port->config.nfc_credits;
-
-		if (discover) {
-			path->hops[i].nfc_credits =
-				nfc_credits & ADP_CS_4_NFC_BUFFERS_MASK;
-		} else {
-			u32 max_credits;
+	tb_path_for_each_hop(path, hop) {
+		int ret;

-			max_credits = (nfc_credits & ADP_CS_4_TOTAL_BUFFERS_MASK) >>
-				ADP_CS_4_TOTAL_BUFFERS_SHIFT;
-			/* Leave some credits for AUX path */
-			path->hops[i].nfc_credits = min(max_credits - 2, 12U);
-		}
+		ret = tb_dp_init_video_credits(hop);
+		if (ret)
+			return ret;
	}
+
+	return 0;
 }

 /**
@@ -678,7 +794,8 @@ struct tb_tunnel *tb_tunnel_discover_dp(struct tb *tb, struct tb_port *in)
		goto err_free;
	}
	tunnel->paths[TB_DP_VIDEO_PATH_OUT] = path;
-	tb_dp_init_video_path(tunnel->paths[TB_DP_VIDEO_PATH_OUT], true);
+	if (tb_dp_init_video_path(tunnel->paths[TB_DP_VIDEO_PATH_OUT]))
+		goto err_free;

	path = tb_path_discover(in, TB_DP_AUX_TX_HOPID, NULL, -1, NULL, "AUX TX");
	if (!path)
@@ -765,7 +882,7 @@ struct tb_tunnel *tb_tunnel_alloc_dp(struct tb *tb, struct tb_port *in,
			     1, "Video");
	if (!path)
		goto err_free;
-	tb_dp_init_video_path(path, false);
+	tb_dp_init_video_path(path);
	paths[TB_DP_VIDEO_PATH_OUT] = path;

	path = tb_path_alloc(tb, in, TB_DP_AUX_TX_HOPID, out,
@@ -789,20 +906,58 @@ struct tb_tunnel *tb_tunnel_alloc_dp(struct tb *tb, struct tb_port *in,
	return NULL;
 }

-static u32 tb_dma_credits(struct tb_port *nhi)
+static unsigned int tb_dma_available_credits(const struct tb_port *port)
 {
-	u32 max_credits;
+	const struct tb_switch *sw = port->sw;
+	int credits;
+
+	credits = tb_available_credits(port, NULL);
+	if (tb_acpi_may_tunnel_pcie())
+		credits -= sw->max_pcie_credits;
+	credits -= port->dma_credits;

-	max_credits = (nhi->config.nfc_credits & ADP_CS_4_TOTAL_BUFFERS_MASK) >>
-		ADP_CS_4_TOTAL_BUFFERS_SHIFT;
-	return min(max_credits, 13U);
+	return credits > 0 ? credits : 0;
 }

-static void tb_dma_init_path(struct tb_path *path, unsigned int efc, u32 credits)
+static int tb_dma_reserve_credits(struct tb_path_hop *hop, unsigned int credits)
 {
-	int i;
+	struct tb_port *port = hop->in_port;
+
+	if (tb_port_use_credit_allocation(port)) {
+		unsigned int available = tb_dma_available_credits(port);
+
+		/*
+		 * Need to have at least TB_MIN_DMA_CREDITS, otherwise
+		 * DMA path cannot be established.
+		 */
+		if (available < TB_MIN_DMA_CREDITS)
+			return -ENOSPC;
+
+		while (credits > available)
+			credits--;
+
+		tb_port_dbg(port, "reserving %u credits for DMA path\n",
+			    credits);
+
+		port->dma_credits += credits;
+	} else {
+		if (tb_port_is_null(port))
+			credits = port->bonded ? 14 : 6;
+		else
+			credits = min(port->total_credits, credits);
+	}
+
+	hop->initial_credits = credits;
+	return 0;
+}
+
+/* Path from lane adapter to NHI */
+static int tb_dma_init_rx_path(struct tb_path *path, unsigned int credits)
+{
+	struct tb_path_hop *hop;
+	unsigned int i, tmp;

-	path->egress_fc_enable = efc;
+	path->egress_fc_enable = TB_PATH_SOURCE | TB_PATH_INTERNAL;
	path->ingress_fc_enable = TB_PATH_ALL;
	path->egress_shared_buffer = TB_PATH_NONE;
	path->ingress_shared_buffer = TB_PATH_NONE;
@@ -810,8 +965,80 @@ static void tb_dma_init_path(struct tb_path *path, unsigned int efc, u32 credits
	path->weight = 1;
	path->clear_fc = true;

-	for (i = 0; i < path->path_length; i++)
-		path->hops[i].initial_credits = credits;
+	/*
+	 * First lane adapter is the one connected to the remote host.
+	 * We don't tunnel other traffic over this link so can use all
+	 * the credits (except the ones reserved for control traffic).
+	 */
+	hop = &path->hops[0];
+	tmp = min(tb_usable_credits(hop->in_port), credits);
+	hop->initial_credits = tmp;
+	hop->in_port->dma_credits += tmp;
+
+	for (i = 1; i < path->path_length; i++) {
+		int ret;
+
+		ret = tb_dma_reserve_credits(&path->hops[i], credits);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+/* Path from NHI to lane adapter */
+static int tb_dma_init_tx_path(struct tb_path *path, unsigned int credits)
+{
+	struct tb_path_hop *hop;
+
+	path->egress_fc_enable = TB_PATH_ALL;
+	path->ingress_fc_enable = TB_PATH_ALL;
+	path->egress_shared_buffer = TB_PATH_NONE;
+	path->ingress_shared_buffer = TB_PATH_NONE;
+	path->priority = 5;
+	path->weight = 1;
+	path->clear_fc = true;
+
+	tb_path_for_each_hop(path, hop) {
+		int ret;
+
+		ret = tb_dma_reserve_credits(hop, credits);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+static void tb_dma_release_credits(struct tb_path_hop *hop)
+{
+	struct tb_port *port = hop->in_port;
+
+	if (tb_port_use_credit_allocation(port)) {
+		port->dma_credits -= hop->initial_credits;
+
+		tb_port_dbg(port, "released %u DMA path credits\n",
+			    hop->initial_credits);
+	}
+}
+
+static void tb_dma_deinit_path(struct tb_path *path)
+{
+	struct tb_path_hop *hop;
+
+	tb_path_for_each_hop(path, hop)
+		tb_dma_release_credits(hop);
+}
+
+static void tb_dma_deinit(struct tb_tunnel *tunnel)
+{
+	int i;
+
+	for (i = 0; i < tunnel->npaths; i++) {
+		if (!tunnel->paths[i])
+			continue;
+		tb_dma_deinit_path(tunnel->paths[i]);
+	}
 }

 /**
@@ -836,7 +1063,7 @@ struct tb_tunnel *tb_tunnel_alloc_dma(struct tb *tb, struct tb_port *nhi,
	struct tb_tunnel *tunnel;
	size_t npaths = 0, i = 0;
	struct tb_path *path;
-	u32 credits;
+	int credits;

	if (receive_ring > 0)
		npaths++;
@@ -852,32 +1079,39 @@ struct tb_tunnel *tb_tunnel_alloc_dma(struct tb *tb, struct tb_port *nhi,
	tunnel->src_port = nhi;
	tunnel->dst_port = dst;
+	tunnel->deinit = tb_dma_deinit;

-	credits = tb_dma_credits(nhi);
+	credits = min_not_zero(TB_DMA_CREDITS, nhi->sw->max_dma_credits);

	if (receive_ring > 0) {
		path = tb_path_alloc(tb, dst, receive_path, nhi, receive_ring, 0,
				     "DMA RX");
-		if (!path) {
-			tb_tunnel_free(tunnel);
-			return NULL;
-		}
-		tb_dma_init_path(path, TB_PATH_SOURCE | TB_PATH_INTERNAL, credits);
+		if (!path)
+			goto err_free;
		tunnel->paths[i++] = path;
+		if (tb_dma_init_rx_path(path, credits)) {
+			tb_tunnel_dbg(tunnel, "not enough buffers for RX path\n");
+			goto err_free;
+		}
	}

	if (transmit_ring > 0) {
		path = tb_path_alloc(tb, nhi, transmit_ring, dst, transmit_path, 0,
				     "DMA TX");
-		if (!path) {
-			tb_tunnel_free(tunnel);
-			return NULL;
-		}
-		tb_dma_init_path(path, TB_PATH_ALL, credits);
+		if (!path)
+			goto err_free;
		tunnel->paths[i++] = path;
+		if (tb_dma_init_tx_path(path, credits)) {
+			tb_tunnel_dbg(tunnel, "not enough buffers for TX path\n");
+			goto err_free;
+		}
	}

	return tunnel;
+
+err_free:
+	tb_tunnel_free(tunnel);
+	return NULL;
 }

 /**
@@ -1071,8 +1305,28 @@ static void tb_usb3_reclaim_available_bandwidth(struct tb_tunnel *tunnel,
				  tunnel->allocated_up, tunnel->allocated_down);
 }

+static void tb_usb3_init_credits(struct tb_path_hop *hop)
+{
+	struct tb_port *port = hop->in_port;
+	struct tb_switch *sw = port->sw;
+	unsigned int credits;
+
+	if (tb_port_use_credit_allocation(port)) {
+		credits = sw->max_usb3_credits;
+	} else {
+		if (tb_port_is_null(port))
32 : 16;
+	else
+		credits = 7;
+	}
+
+	hop->initial_credits = credits;
+}
+
 static void tb_usb3_init_path(struct tb_path *path)
 {
+	struct tb_path_hop *hop;
+
 	path->egress_fc_enable = TB_PATH_SOURCE | TB_PATH_INTERNAL;
 	path->egress_shared_buffer = TB_PATH_NONE;
 	path->ingress_fc_enable = TB_PATH_ALL;
@@ -1080,10 +1334,9 @@ static void tb_usb3_init_path(struct tb_path *path)
 	path->priority = 3;
 	path->weight = 3;
 	path->drop_packages = 0;
-	path->hops[0].initial_credits = 7;
-	if (path->path_length > 1)
-		path->hops[1].initial_credits =
-			tb_initial_credits(path->hops[1].in_port->sw);
+
+	tb_path_for_each_hop(path, hop)
+		tb_usb3_init_credits(hop);
 }
 
 /**
@@ -1283,6 +1536,9 @@ void tb_tunnel_free(struct tb_tunnel *tunnel)
 	if (!tunnel)
 		return;
 
+	if (tunnel->deinit)
+		tunnel->deinit(tunnel);
+
 	for (i = 0; i < tunnel->npaths; i++) {
 		if (tunnel->paths[i])
 			tb_path_free(tunnel->paths[i]);
diff --git a/drivers/thunderbolt/tunnel.h b/drivers/thunderbolt/tunnel.h
index a66994fb4e60..eea14e24f7e0 100644
--- a/drivers/thunderbolt/tunnel.h
+++ b/drivers/thunderbolt/tunnel.h
@@ -27,6 +27,7 @@ enum tb_tunnel_type {
 * @paths: All paths required by the tunnel
 * @npaths: Number of paths in @paths
 * @init: Optional tunnel specific initialization
+ * @deinit: Optional tunnel specific de-initialization
 * @activate: Optional tunnel specific activation/deactivation
 * @consumed_bandwidth: Return how much bandwidth the tunnel consumes
 * @release_unused_bandwidth: Release all unused bandwidth
@@ -47,6 +48,7 @@ struct tb_tunnel {
 	struct tb_path **paths;
 	size_t npaths;
 	int (*init)(struct tb_tunnel *tunnel);
+	void (*deinit)(struct tb_tunnel *tunnel);
 	int (*activate)(struct tb_tunnel *tunnel, bool activate);
 	int (*consumed_bandwidth)(struct tb_tunnel *tunnel, int *consumed_up,
 				  int *consumed_down);

From patchwork Tue May 18 14:10:01 2021
X-Patchwork-Submitter: Mika Westerberg
X-Patchwork-Id: 443012
From: Mika Westerberg
To: linux-usb@vger.kernel.org
Cc: Yehezkel Bernat, Michael Jamet, Gil Fine, Casey G Bowman, Andreas Noever, Lukas Wunner, Mika Westerberg
Subject: [PATCH 7/8] thunderbolt: Add quirk for Intel Goshen Ridge DP credits
Date: Tue, 18 May 2021 17:10:01 +0300
Message-Id: <20210518141002.63616-8-mika.westerberg@linux.intel.com>
In-Reply-To: <20210518141002.63616-1-mika.westerberg@linux.intel.com>
References: <20210518141002.63616-1-mika.westerberg@linux.intel.com>

Intel Goshen Ridge reports wrong DP main credits in NVM 27 and earlier,
so add a quirk that fixes it. This also requires expanding the quirk
table so that entries can match on hardware vendor/device IDs.

Signed-off-by: Mika Westerberg
---
 drivers/thunderbolt/quirks.c | 29 ++++++++++++++++++++++++++---
 1 file changed, 26 insertions(+), 3 deletions(-)

diff --git a/drivers/thunderbolt/quirks.c b/drivers/thunderbolt/quirks.c
index 57e2978a3c21..8d73bd7fce15 100644
--- a/drivers/thunderbolt/quirks.c
+++ b/drivers/thunderbolt/quirks.c
@@ -12,7 +12,17 @@ static void quirk_force_power_link(struct tb_switch *sw)
 	sw->quirks |= QUIRK_FORCE_POWER_LINK_CONTROLLER;
 }
 
+static void quirk_dp_credit_allocation(struct tb_switch *sw)
+{
+	if (sw->credit_allocation && sw->min_dp_main_credits == 56) {
+		sw->min_dp_main_credits = 18;
+		tb_sw_dbg(sw, "quirked DP main: %u\n", sw->min_dp_main_credits);
+	}
+}
+
 struct tb_quirk {
+	u16 hw_vendor_id;
+	u16 hw_device_id;
 	u16 vendor;
 	u16 device;
 	void (*hook)(struct tb_switch *sw);
@@ -20,7 +30,12 @@ struct tb_quirk {
 
 static const struct tb_quirk tb_quirks[] = {
 	/* Dell WD19TB supports self-authentication on unplug */
-	{ 0x00d4, 0xb070, quirk_force_power_link },
+	{ 0x0000, 0x0000, 0x00d4, 0xb070, quirk_force_power_link },
+	/*
+	 * Intel Goshen Ridge NVM 27 and before report wrong number of
+	 * DP buffers.
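+	 * A zero field in a tb_quirks[] entry acts as a wildcard in
+	 * tb_check_quirks() below, so this entry matches on the
+	 * hardware IDs only.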
+	 */
+	{ 0x8087, 0x0b26, 0x0000, 0x0000, quirk_dp_credit_allocation },
 };
 
 /**
@@ -36,7 +51,15 @@ void tb_check_quirks(struct tb_switch *sw)
 	for (i = 0; i < ARRAY_SIZE(tb_quirks); i++) {
 		const struct tb_quirk *q = &tb_quirks[i];
 
-		if (sw->device == q->device && sw->vendor == q->vendor)
-			q->hook(sw);
+		if (q->hw_vendor_id && q->hw_vendor_id != sw->config.vendor_id)
+			continue;
+		if (q->hw_device_id && q->hw_device_id != sw->config.device_id)
+			continue;
+		if (q->vendor && q->vendor != sw->vendor)
+			continue;
+		if (q->device && q->device != sw->device)
+			continue;
+
+		q->hook(sw);
 	}
 }

From patchwork Tue May 18 14:10:02 2021
X-Patchwork-Submitter: Mika Westerberg
X-Patchwork-Id: 443015
From: Mika Westerberg
To: linux-usb@vger.kernel.org
Cc: Yehezkel Bernat, Michael Jamet, Gil Fine, Casey G Bowman, Andreas Noever, Lukas Wunner, Mika Westerberg
Subject: [PATCH 8/8] thunderbolt: Add KUnit tests for credit allocation
Date: Tue, 18 May 2021 17:10:02 +0300
Message-Id: <20210518141002.63616-9-mika.westerberg@linux.intel.com>
In-Reply-To: <20210518141002.63616-1-mika.westerberg@linux.intel.com>
References: <20210518141002.63616-1-mika.westerberg@linux.intel.com>

This adds a couple of KUnit tests for USB4 credit allocation.
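As a cross-check of the numbers the DMA tests expect, the reserve/clamp
bookkeeping can be modeled as a standalone C program. The sketch below
is illustrative only: the model_* names are made up, the values of
TB_DMA_CREDITS/TB_MIN_DMA_CREDITS are assumed from their use in this
series, and the reserved-credit total simply encodes the budget
described in tb_test_credit_alloc_dma_multiple() rather than the real
tb_available_credits() arithmetic.

	/* Standalone model of per-port DMA credit bookkeeping; not driver code. */
	#include <stdio.h>

	#define TB_MIN_DMA_CREDITS	1	/* assumed: minimum per DMA path */
	#define TB_DMA_CREDITS		14	/* assumed: default DMA credits */

	struct model_port {
		unsigned int total_credits;	/* 120 on the bonded test link */
		unsigned int reserved;		/* ctl + DP + USB3 + PCIe + spare */
		unsigned int dma_credits;	/* claimed by established DMA paths */
	};

	/* Rough stand-in for tb_dma_available_credits(): what is still free. */
	static unsigned int model_available(const struct model_port *p)
	{
		unsigned int used = p->reserved + p->dma_credits;

		return p->total_credits > used ? p->total_credits - used : 0;
	}

	/* Rough stand-in for tb_dma_reserve_credits(): clamp, then claim. */
	static int model_reserve(struct model_port *p, unsigned int credits)
	{
		unsigned int available = model_available(p);

		if (available < TB_MIN_DMA_CREDITS)
			return -1;	/* -ENOSPC in the driver */
		if (credits > available)
			credits = available;
		p->dma_credits += credits;
		return (int)credits;
	}

	int main(void)
	{
		/*
		 * Host lane adapter from the tests: 120 total credits with
		 * 105 spoken for elsewhere, leaving 15 for DMA tunnels.
		 */
		struct model_port port = { .total_credits = 120, .reserved = 105 };

		printf("tunnel 1: %d\n", model_reserve(&port, TB_DMA_CREDITS));	/* 14 */
		printf("tunnel 2: %d\n", model_reserve(&port, TB_DMA_CREDITS));	/* 1 */
		printf("tunnel 3: %d\n", model_reserve(&port, TB_DMA_CREDITS));	/* -1 */
		return 0;
	}

Built with a plain C compiler this prints 14, 1 and -1, mirroring the
three tb_tunnel_alloc_dma() attempts in tb_test_credit_alloc_dma_multiple().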
Signed-off-by: Mika Westerberg --- drivers/thunderbolt/test.c | 545 +++++++++++++++++++++++++++++++++++++ 1 file changed, 545 insertions(+) diff --git a/drivers/thunderbolt/test.c b/drivers/thunderbolt/test.c index 5ff5a03bc9ce..cf34c1ecf5d5 100644 --- a/drivers/thunderbolt/test.c +++ b/drivers/thunderbolt/test.c @@ -87,22 +87,30 @@ static struct tb_switch *alloc_host(struct kunit *test) sw->ports[1].config.type = TB_TYPE_PORT; sw->ports[1].config.max_in_hop_id = 19; sw->ports[1].config.max_out_hop_id = 19; + sw->ports[1].total_credits = 60; + sw->ports[1].ctl_credits = 2; sw->ports[1].dual_link_port = &sw->ports[2]; sw->ports[2].config.type = TB_TYPE_PORT; sw->ports[2].config.max_in_hop_id = 19; sw->ports[2].config.max_out_hop_id = 19; + sw->ports[2].total_credits = 60; + sw->ports[2].ctl_credits = 2; sw->ports[2].dual_link_port = &sw->ports[1]; sw->ports[2].link_nr = 1; sw->ports[3].config.type = TB_TYPE_PORT; sw->ports[3].config.max_in_hop_id = 19; sw->ports[3].config.max_out_hop_id = 19; + sw->ports[3].total_credits = 60; + sw->ports[3].ctl_credits = 2; sw->ports[3].dual_link_port = &sw->ports[4]; sw->ports[4].config.type = TB_TYPE_PORT; sw->ports[4].config.max_in_hop_id = 19; sw->ports[4].config.max_out_hop_id = 19; + sw->ports[4].total_credits = 60; + sw->ports[4].ctl_credits = 2; sw->ports[4].dual_link_port = &sw->ports[3]; sw->ports[4].link_nr = 1; @@ -143,6 +151,25 @@ static struct tb_switch *alloc_host(struct kunit *test) return sw; } +static struct tb_switch *alloc_host_usb4(struct kunit *test) +{ + struct tb_switch *sw; + + sw = alloc_host(test); + if (!sw) + return NULL; + + sw->generation = 4; + sw->credit_allocation = true; + sw->max_usb3_credits = 32; + sw->min_dp_aux_credits = 1; + sw->min_dp_main_credits = 0; + sw->max_pcie_credits = 64; + sw->max_dma_credits = 14; + + return sw; +} + static struct tb_switch *alloc_dev_default(struct kunit *test, struct tb_switch *parent, u64 route, bool bonded) @@ -164,44 +191,60 @@ static struct tb_switch *alloc_dev_default(struct kunit *test, sw->ports[1].config.type = TB_TYPE_PORT; sw->ports[1].config.max_in_hop_id = 19; sw->ports[1].config.max_out_hop_id = 19; + sw->ports[1].total_credits = 60; + sw->ports[1].ctl_credits = 2; sw->ports[1].dual_link_port = &sw->ports[2]; sw->ports[2].config.type = TB_TYPE_PORT; sw->ports[2].config.max_in_hop_id = 19; sw->ports[2].config.max_out_hop_id = 19; + sw->ports[2].total_credits = 60; + sw->ports[2].ctl_credits = 2; sw->ports[2].dual_link_port = &sw->ports[1]; sw->ports[2].link_nr = 1; sw->ports[3].config.type = TB_TYPE_PORT; sw->ports[3].config.max_in_hop_id = 19; sw->ports[3].config.max_out_hop_id = 19; + sw->ports[3].total_credits = 60; + sw->ports[3].ctl_credits = 2; sw->ports[3].dual_link_port = &sw->ports[4]; sw->ports[4].config.type = TB_TYPE_PORT; sw->ports[4].config.max_in_hop_id = 19; sw->ports[4].config.max_out_hop_id = 19; + sw->ports[4].total_credits = 60; + sw->ports[4].ctl_credits = 2; sw->ports[4].dual_link_port = &sw->ports[3]; sw->ports[4].link_nr = 1; sw->ports[5].config.type = TB_TYPE_PORT; sw->ports[5].config.max_in_hop_id = 19; sw->ports[5].config.max_out_hop_id = 19; + sw->ports[5].total_credits = 60; + sw->ports[5].ctl_credits = 2; sw->ports[5].dual_link_port = &sw->ports[6]; sw->ports[6].config.type = TB_TYPE_PORT; sw->ports[6].config.max_in_hop_id = 19; sw->ports[6].config.max_out_hop_id = 19; + sw->ports[6].total_credits = 60; + sw->ports[6].ctl_credits = 2; sw->ports[6].dual_link_port = &sw->ports[5]; sw->ports[6].link_nr = 1; sw->ports[7].config.type = TB_TYPE_PORT; 
sw->ports[7].config.max_in_hop_id = 19; sw->ports[7].config.max_out_hop_id = 19; + sw->ports[7].total_credits = 60; + sw->ports[7].ctl_credits = 2; sw->ports[7].dual_link_port = &sw->ports[8]; sw->ports[8].config.type = TB_TYPE_PORT; sw->ports[8].config.max_in_hop_id = 19; sw->ports[8].config.max_out_hop_id = 19; + sw->ports[8].total_credits = 60; + sw->ports[8].ctl_credits = 2; sw->ports[8].dual_link_port = &sw->ports[7]; sw->ports[8].link_nr = 1; @@ -265,9 +308,13 @@ static struct tb_switch *alloc_dev_default(struct kunit *test, if (bonded) { /* Bonding is used */ port->bonded = true; + port->total_credits *= 2; port->dual_link_port->bonded = true; + port->dual_link_port->total_credits = 0; upstream_port->bonded = true; + upstream_port->total_credits *= 2; upstream_port->dual_link_port->bonded = true; + upstream_port->dual_link_port->total_credits = 0; } return sw; @@ -294,6 +341,27 @@ static struct tb_switch *alloc_dev_with_dpin(struct kunit *test, return sw; } +static struct tb_switch *alloc_dev_usb4(struct kunit *test, + struct tb_switch *parent, + u64 route, bool bonded) +{ + struct tb_switch *sw; + + sw = alloc_dev_default(test, parent, route, bonded); + if (!sw) + return NULL; + + sw->generation = 4; + sw->credit_allocation = true; + sw->max_usb3_credits = 14; + sw->min_dp_aux_credits = 1; + sw->min_dp_main_credits = 18; + sw->max_pcie_credits = 32; + sw->max_dma_credits = 14; + + return sw; +} + static void tb_test_path_basic(struct kunit *test) { struct tb_port *src_port, *dst_port, *p; @@ -1829,6 +1897,475 @@ static void tb_test_tunnel_dma_match(struct kunit *test) tb_tunnel_free(tunnel); } +static void tb_test_credit_alloc_legacy_not_bonded(struct kunit *test) +{ + struct tb_switch *host, *dev; + struct tb_port *up, *down; + struct tb_tunnel *tunnel; + struct tb_path *path; + + host = alloc_host(test); + dev = alloc_dev_default(test, host, 0x1, false); + + down = &host->ports[8]; + up = &dev->ports[9]; + tunnel = tb_tunnel_alloc_pci(NULL, up, down); + KUNIT_ASSERT_TRUE(test, tunnel != NULL); + KUNIT_ASSERT_EQ(test, tunnel->npaths, (size_t)2); + + path = tunnel->paths[0]; + KUNIT_ASSERT_EQ(test, path->path_length, 2); + KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U); + KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 7U); + KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U); + KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 16U); + + path = tunnel->paths[1]; + KUNIT_ASSERT_EQ(test, path->path_length, 2); + KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U); + KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 7U); + KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U); + KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 16U); + + tb_tunnel_free(tunnel); +} + +static void tb_test_credit_alloc_legacy_bonded(struct kunit *test) +{ + struct tb_switch *host, *dev; + struct tb_port *up, *down; + struct tb_tunnel *tunnel; + struct tb_path *path; + + host = alloc_host(test); + dev = alloc_dev_default(test, host, 0x1, true); + + down = &host->ports[8]; + up = &dev->ports[9]; + tunnel = tb_tunnel_alloc_pci(NULL, up, down); + KUNIT_ASSERT_TRUE(test, tunnel != NULL); + KUNIT_ASSERT_EQ(test, tunnel->npaths, (size_t)2); + + path = tunnel->paths[0]; + KUNIT_ASSERT_EQ(test, path->path_length, 2); + KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U); + KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 7U); + KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U); + KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 32U); + + path = tunnel->paths[1]; + 
KUNIT_ASSERT_EQ(test, path->path_length, 2); + KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U); + KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 7U); + KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U); + KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 32U); + + tb_tunnel_free(tunnel); +} + +static void tb_test_credit_alloc_pcie(struct kunit *test) +{ + struct tb_switch *host, *dev; + struct tb_port *up, *down; + struct tb_tunnel *tunnel; + struct tb_path *path; + + host = alloc_host_usb4(test); + dev = alloc_dev_usb4(test, host, 0x1, true); + + down = &host->ports[8]; + up = &dev->ports[9]; + tunnel = tb_tunnel_alloc_pci(NULL, up, down); + KUNIT_ASSERT_TRUE(test, tunnel != NULL); + KUNIT_ASSERT_EQ(test, tunnel->npaths, (size_t)2); + + path = tunnel->paths[0]; + KUNIT_ASSERT_EQ(test, path->path_length, 2); + KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U); + KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 7U); + KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U); + KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 32U); + + path = tunnel->paths[1]; + KUNIT_ASSERT_EQ(test, path->path_length, 2); + KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U); + KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 7U); + KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U); + KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 64U); + + tb_tunnel_free(tunnel); +} + +static void tb_test_credit_alloc_dp(struct kunit *test) +{ + struct tb_switch *host, *dev; + struct tb_port *in, *out; + struct tb_tunnel *tunnel; + struct tb_path *path; + + host = alloc_host_usb4(test); + dev = alloc_dev_usb4(test, host, 0x1, true); + + in = &host->ports[5]; + out = &dev->ports[14]; + + tunnel = tb_tunnel_alloc_dp(NULL, in, out, 0, 0); + KUNIT_ASSERT_TRUE(test, tunnel != NULL); + KUNIT_ASSERT_EQ(test, tunnel->npaths, (size_t)3); + + /* Video (main) path */ + path = tunnel->paths[0]; + KUNIT_ASSERT_EQ(test, path->path_length, 2); + KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 12U); + KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 0U); + KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 18U); + KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 0U); + + /* AUX TX */ + path = tunnel->paths[1]; + KUNIT_ASSERT_EQ(test, path->path_length, 2); + KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U); + KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 1U); + KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U); + KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 1U); + + /* AUX RX */ + path = tunnel->paths[2]; + KUNIT_ASSERT_EQ(test, path->path_length, 2); + KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U); + KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 1U); + KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U); + KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 1U); + + tb_tunnel_free(tunnel); +} + +static void tb_test_credit_alloc_usb3(struct kunit *test) +{ + struct tb_switch *host, *dev; + struct tb_port *up, *down; + struct tb_tunnel *tunnel; + struct tb_path *path; + + host = alloc_host_usb4(test); + dev = alloc_dev_usb4(test, host, 0x1, true); + + down = &host->ports[12]; + up = &dev->ports[16]; + tunnel = tb_tunnel_alloc_usb3(NULL, up, down, 0, 0); + KUNIT_ASSERT_TRUE(test, tunnel != NULL); + KUNIT_ASSERT_EQ(test, tunnel->npaths, (size_t)2); + + path = tunnel->paths[0]; + KUNIT_ASSERT_EQ(test, path->path_length, 2); + KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U); + KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 7U); + KUNIT_EXPECT_EQ(test, 
path->hops[1].nfc_credits, 0U);
+	KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 14U);
+
+	path = tunnel->paths[1];
+	KUNIT_ASSERT_EQ(test, path->path_length, 2);
+	KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
+	KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 7U);
+	KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
+	KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 32U);
+
+	tb_tunnel_free(tunnel);
+}
+
+static void tb_test_credit_alloc_dma(struct kunit *test)
+{
+	struct tb_switch *host, *dev;
+	struct tb_port *nhi, *port;
+	struct tb_tunnel *tunnel;
+	struct tb_path *path;
+
+	host = alloc_host_usb4(test);
+	dev = alloc_dev_usb4(test, host, 0x1, true);
+
+	nhi = &host->ports[7];
+	port = &dev->ports[3];
+
+	tunnel = tb_tunnel_alloc_dma(NULL, nhi, port, 8, 1, 8, 1);
+	KUNIT_ASSERT_TRUE(test, tunnel != NULL);
+	KUNIT_ASSERT_EQ(test, tunnel->npaths, (size_t)2);
+
+	/* DMA RX */
+	path = tunnel->paths[0];
+	KUNIT_ASSERT_EQ(test, path->path_length, 2);
+	KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
+	KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 14U);
+	KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
+	KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 14U);
+
+	/* DMA TX */
+	path = tunnel->paths[1];
+	KUNIT_ASSERT_EQ(test, path->path_length, 2);
+	KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
+	KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 0U);
+	KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
+	KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 14U);
+
+	tb_tunnel_free(tunnel);
+}
+
+static void tb_test_credit_alloc_dma_multiple(struct kunit *test)
+{
+	struct tb_tunnel *tunnel1, *tunnel2, *tunnel3;
+	struct tb_switch *host, *dev;
+	struct tb_port *nhi, *port;
+	struct tb_path *path;
+
+	host = alloc_host_usb4(test);
+	dev = alloc_dev_usb4(test, host, 0x1, true);
+
+	nhi = &host->ports[7];
+	port = &dev->ports[3];
+
+	/*
+	 * Create three DMA tunnels through the same ports. With the
+	 * default buffers we should be able to create two; the third
+	 * one fails.
+	 *
+	 * For the default host we have the following buffers for DMA:
+	 *
+	 * 120 - (2 + 2 * (1 + 0) + 32 + 64 + spare) = 20
+	 *
+	 * For the device we have the following:
+	 *
+	 * 120 - (2 + 2 * (1 + 18) + 14 + 32 + spare) = 34
+	 *
+	 * spare = 14 + 1 = 15
+	 *
+	 * So on the host side the first tunnel gets 14 credits, the
+	 * second gets the remaining 1, and then we run out of buffers.
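+	 * In other words, the host adapter ends up handing out
+	 * 14 + 1 = 15 credits to DMA paths in total, which is what the
+	 * expectations below assert.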
+ */ + tunnel1 = tb_tunnel_alloc_dma(NULL, nhi, port, 8, 1, 8, 1); + KUNIT_ASSERT_TRUE(test, tunnel1 != NULL); + KUNIT_ASSERT_EQ(test, tunnel1->npaths, (size_t)2); + + path = tunnel1->paths[0]; + KUNIT_ASSERT_EQ(test, path->path_length, 2); + KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U); + KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 14U); + KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U); + KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 14U); + + path = tunnel1->paths[1]; + KUNIT_ASSERT_EQ(test, path->path_length, 2); + KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U); + KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 0U); + KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U); + KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 14U); + + tunnel2 = tb_tunnel_alloc_dma(NULL, nhi, port, 9, 2, 9, 2); + KUNIT_ASSERT_TRUE(test, tunnel2 != NULL); + KUNIT_ASSERT_EQ(test, tunnel2->npaths, (size_t)2); + + path = tunnel2->paths[0]; + KUNIT_ASSERT_EQ(test, path->path_length, 2); + KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U); + KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 14U); + KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U); + KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 1U); + + path = tunnel2->paths[1]; + KUNIT_ASSERT_EQ(test, path->path_length, 2); + KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U); + KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 0U); + KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U); + KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 1U); + + tunnel3 = tb_tunnel_alloc_dma(NULL, nhi, port, 10, 3, 10, 3); + KUNIT_ASSERT_TRUE(test, tunnel3 == NULL); + + /* + * Release the first DMA tunnel. That should make 14 buffers + * available for the next tunnel. + */ + tb_tunnel_free(tunnel1); + + tunnel3 = tb_tunnel_alloc_dma(NULL, nhi, port, 10, 3, 10, 3); + KUNIT_ASSERT_TRUE(test, tunnel3 != NULL); + + path = tunnel3->paths[0]; + KUNIT_ASSERT_EQ(test, path->path_length, 2); + KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U); + KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 14U); + KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U); + KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 14U); + + path = tunnel3->paths[1]; + KUNIT_ASSERT_EQ(test, path->path_length, 2); + KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U); + KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 0U); + KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U); + KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 14U); + + tb_tunnel_free(tunnel3); + tb_tunnel_free(tunnel2); +} + +static void tb_test_credit_alloc_all(struct kunit *test) +{ + struct tb_port *up, *down, *in, *out, *nhi, *port; + struct tb_tunnel *pcie_tunnel, *dp_tunnel1, *dp_tunnel2, *usb3_tunnel; + struct tb_tunnel *dma_tunnel1, *dma_tunnel2; + struct tb_switch *host, *dev; + struct tb_path *path; + + /* + * Create PCIe, 2 x DP, USB 3.x and two DMA tunnels from host to + * device. Expectation is that all these can be established with + * the default credit allocation found in Intel hardware. 
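+	 * (Per the budget in tb_test_credit_alloc_dma_multiple() above,
+	 * control, the two DP tunnels, USB3 and PCIe take
+	 * 2 + 2 * (1 + 0) + 32 + 64 = 100 of the host adapter's 120
+	 * credits, so the two DMA tunnels below still fit.)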
+ */ + + host = alloc_host_usb4(test); + dev = alloc_dev_usb4(test, host, 0x1, true); + + down = &host->ports[8]; + up = &dev->ports[9]; + pcie_tunnel = tb_tunnel_alloc_pci(NULL, up, down); + KUNIT_ASSERT_TRUE(test, pcie_tunnel != NULL); + KUNIT_ASSERT_EQ(test, pcie_tunnel->npaths, (size_t)2); + + path = pcie_tunnel->paths[0]; + KUNIT_ASSERT_EQ(test, path->path_length, 2); + KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U); + KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 7U); + KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U); + KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 32U); + + path = pcie_tunnel->paths[1]; + KUNIT_ASSERT_EQ(test, path->path_length, 2); + KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U); + KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 7U); + KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U); + KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 64U); + + in = &host->ports[5]; + out = &dev->ports[13]; + + dp_tunnel1 = tb_tunnel_alloc_dp(NULL, in, out, 0, 0); + KUNIT_ASSERT_TRUE(test, dp_tunnel1 != NULL); + KUNIT_ASSERT_EQ(test, dp_tunnel1->npaths, (size_t)3); + + path = dp_tunnel1->paths[0]; + KUNIT_ASSERT_EQ(test, path->path_length, 2); + KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 12U); + KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 0U); + KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 18U); + KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 0U); + + path = dp_tunnel1->paths[1]; + KUNIT_ASSERT_EQ(test, path->path_length, 2); + KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U); + KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 1U); + KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U); + KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 1U); + + path = dp_tunnel1->paths[2]; + KUNIT_ASSERT_EQ(test, path->path_length, 2); + KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U); + KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 1U); + KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U); + KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 1U); + + in = &host->ports[6]; + out = &dev->ports[14]; + + dp_tunnel2 = tb_tunnel_alloc_dp(NULL, in, out, 0, 0); + KUNIT_ASSERT_TRUE(test, dp_tunnel2 != NULL); + KUNIT_ASSERT_EQ(test, dp_tunnel2->npaths, (size_t)3); + + path = dp_tunnel2->paths[0]; + KUNIT_ASSERT_EQ(test, path->path_length, 2); + KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 12U); + KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 0U); + KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 18U); + KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 0U); + + path = dp_tunnel2->paths[1]; + KUNIT_ASSERT_EQ(test, path->path_length, 2); + KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U); + KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 1U); + KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U); + KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 1U); + + path = dp_tunnel2->paths[2]; + KUNIT_ASSERT_EQ(test, path->path_length, 2); + KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U); + KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 1U); + KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U); + KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 1U); + + down = &host->ports[12]; + up = &dev->ports[16]; + usb3_tunnel = tb_tunnel_alloc_usb3(NULL, up, down, 0, 0); + KUNIT_ASSERT_TRUE(test, usb3_tunnel != NULL); + KUNIT_ASSERT_EQ(test, usb3_tunnel->npaths, (size_t)2); + + path = usb3_tunnel->paths[0]; + KUNIT_ASSERT_EQ(test, path->path_length, 2); + KUNIT_EXPECT_EQ(test, 
path->hops[0].nfc_credits, 0U); + KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 7U); + KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U); + KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 14U); + + path = usb3_tunnel->paths[1]; + KUNIT_ASSERT_EQ(test, path->path_length, 2); + KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U); + KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 7U); + KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U); + KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 32U); + + nhi = &host->ports[7]; + port = &dev->ports[3]; + + dma_tunnel1 = tb_tunnel_alloc_dma(NULL, nhi, port, 8, 1, 8, 1); + KUNIT_ASSERT_TRUE(test, dma_tunnel1 != NULL); + KUNIT_ASSERT_EQ(test, dma_tunnel1->npaths, (size_t)2); + + path = dma_tunnel1->paths[0]; + KUNIT_ASSERT_EQ(test, path->path_length, 2); + KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U); + KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 14U); + KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U); + KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 14U); + + path = dma_tunnel1->paths[1]; + KUNIT_ASSERT_EQ(test, path->path_length, 2); + KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U); + KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 0U); + KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U); + KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 14U); + + dma_tunnel2 = tb_tunnel_alloc_dma(NULL, nhi, port, 9, 2, 9, 2); + KUNIT_ASSERT_TRUE(test, dma_tunnel2 != NULL); + KUNIT_ASSERT_EQ(test, dma_tunnel2->npaths, (size_t)2); + + path = dma_tunnel2->paths[0]; + KUNIT_ASSERT_EQ(test, path->path_length, 2); + KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U); + KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 14U); + KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U); + KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 1U); + + path = dma_tunnel2->paths[1]; + KUNIT_ASSERT_EQ(test, path->path_length, 2); + KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U); + KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 0U); + KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U); + KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 1U); + + tb_tunnel_free(dma_tunnel2); + tb_tunnel_free(dma_tunnel1); + tb_tunnel_free(usb3_tunnel); + tb_tunnel_free(dp_tunnel2); + tb_tunnel_free(dp_tunnel1); + tb_tunnel_free(pcie_tunnel); +} + static const u32 root_directory[] = { 0x55584401, /* "UXD" v1 */ 0x00000018, /* Root directory length */ @@ -2105,6 +2642,14 @@ static struct kunit_case tb_test_cases[] = { KUNIT_CASE(tb_test_tunnel_dma_tx), KUNIT_CASE(tb_test_tunnel_dma_chain), KUNIT_CASE(tb_test_tunnel_dma_match), + KUNIT_CASE(tb_test_credit_alloc_legacy_not_bonded), + KUNIT_CASE(tb_test_credit_alloc_legacy_bonded), + KUNIT_CASE(tb_test_credit_alloc_pcie), + KUNIT_CASE(tb_test_credit_alloc_dp), + KUNIT_CASE(tb_test_credit_alloc_usb3), + KUNIT_CASE(tb_test_credit_alloc_dma), + KUNIT_CASE(tb_test_credit_alloc_dma_multiple), + KUNIT_CASE(tb_test_credit_alloc_all), KUNIT_CASE(tb_test_property_parse), KUNIT_CASE(tb_test_property_format), KUNIT_CASE(tb_test_property_copy),