From patchwork Mon Jun 15 14:26:30 2020
X-Patchwork-Id: 214883
From: Mika Westerberg <mika.westerberg@linux.intel.com>
To: linux-usb@vger.kernel.org
Cc: Andreas Noever, Michael Jamet, Yehezkel Bernat,
    Greg Kroah-Hartman, Rajmohan Mani, Lukas Wunner
Subject: [PATCH 02/17] thunderbolt: Make tb_next_port_on_path() work with tree topologies
Date: Mon, 15 Jun 2020 17:26:30 +0300
Message-Id: <20200615142645.56209-3-mika.westerberg@linux.intel.com>
In-Reply-To: <20200615142645.56209-1-mika.westerberg@linux.intel.com>

USB4 makes it possible to build tree topologies of devices, in the same
way as USB3. This was actually possible with Thunderbolt 1, 2 and 3 as
well, but all the available devices only had two ports, which allowed
building nothing more than daisy chains. With USB4 it is possible, for
example, that a DP IN adapter is part of an eGPU device router and needs
to be tunneled over the tree topology to a DP OUT adapter. Update
tb_next_port_on_path() to support such topologies.
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/switch.c | 17 ++++++++++++-----
 1 file changed, 12 insertions(+), 5 deletions(-)

diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index 95b75a712ade..29db484d2c74 100644
--- a/drivers/thunderbolt/switch.c
+++ b/drivers/thunderbolt/switch.c
@@ -850,6 +850,13 @@ void tb_port_release_out_hopid(struct tb_port *port, int hopid)
 	ida_simple_remove(&port->out_hopids, hopid);
 }
 
+static inline bool tb_switch_is_reachable(const struct tb_switch *parent,
+					  const struct tb_switch *sw)
+{
+	u64 mask = (1ULL << parent->config.depth * 8) - 1;
+	return (tb_route(parent) & mask) == (tb_route(sw) & mask);
+}
+
 /**
  * tb_next_port_on_path() - Return next port for given port on a path
  * @start: Start port of the walk
@@ -879,12 +886,12 @@ struct tb_port *tb_next_port_on_path(struct tb_port *start, struct tb_port *end,
 			return end;
 	}
 
-	if (start->sw->config.depth < end->sw->config.depth) {
+	if (tb_switch_is_reachable(prev->sw, end->sw)) {
+		next = tb_port_at(tb_route(end->sw), prev->sw);
+		/* Walk down the topology if next == prev */
 		if (prev->remote &&
-		    prev->remote->sw->config.depth > prev->sw->config.depth)
+		    (next == prev || next->dual_link_port == prev))
 			next = prev->remote;
-		else
-			next = tb_port_at(tb_route(end->sw), prev->sw);
 	} else {
 		if (tb_is_upstream_port(prev)) {
 			next = prev->remote;
@@ -901,7 +908,7 @@ struct tb_port *tb_next_port_on_path(struct tb_port *start, struct tb_port *end,
 		}
 	}
 
-	return next;
+	return next != prev ? next : NULL;
 }
 
 static int tb_port_get_link_speed(struct tb_port *port)
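[Editorial note] The new helper relies on the fact that a route string
stores the downstream port taken at each depth level in one byte per
level, so masking with (1ULL << depth * 8) - 1 keeps the route prefix
leading up to the parent. Two routers share that prefix exactly when one
sits in the subtree below the other. A minimal illustrative sketch with
made-up route values (not part of the patch; u64 is the usual kernel
type):

static bool example_is_reachable(void)
{
	u64 parent_route = 0x0301;	/* depth 2: port 1, then port 3 */
	u64 child_route = 0x050301;	/* depth 3: continues via port 5 */
	u64 mask = (1ULL << 2 * 8) - 1;	/* parent->config.depth == 2 */

	/* 0x0301 == (0x050301 & 0xffff), so the child is reachable */
	return (parent_route & mask) == (child_route & mask);
}

This prefix test is what lets the walk descend into the correct branch
of a tree instead of assuming a single chain.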
From patchwork Mon Jun 15 14:26:31 2020
X-Patchwork-Id: 214884
From: Mika Westerberg <mika.westerberg@linux.intel.com>
To: linux-usb@vger.kernel.org
Cc: Andreas Noever, Michael Jamet, Yehezkel Bernat,
    Greg Kroah-Hartman, Rajmohan Mani, Lukas Wunner
Subject: [PATCH 03/17] thunderbolt: Make tb_path_alloc() work with tree topologies
Date: Mon, 15 Jun 2020 17:26:31 +0300
Message-Id: <20200615142645.56209-4-mika.westerberg@linux.intel.com>
In-Reply-To: <20200615142645.56209-1-mika.westerberg@linux.intel.com>

With USB4, topologies are no longer limited to daisy chains, so when
calculating how many hops there are between two ports we need to walk
the whole path instead. Add a helper function, tb_for_each_port_on_path(),
that can be used to walk over each port on a path, and make
tb_path_alloc() use it.

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/path.c | 12 ++++++------
 drivers/thunderbolt/tb.h   | 12 ++++++++++++
 2 files changed, 18 insertions(+), 6 deletions(-)

diff --git a/drivers/thunderbolt/path.c b/drivers/thunderbolt/path.c
index ad58559ea88e..77abb1fa80c0 100644
--- a/drivers/thunderbolt/path.c
+++ b/drivers/thunderbolt/path.c
@@ -239,12 +239,12 @@ struct tb_path *tb_path_alloc(struct tb *tb, struct tb_port *src, int src_hopid,
 	if (!path)
 		return NULL;
 
-	/*
-	 * Number of hops on a path is the distance between the two
-	 * switches plus the source adapter port.
-	 */
-	num_hops = abs(tb_route_length(tb_route(src->sw)) -
-		       tb_route_length(tb_route(dst->sw))) + 1;
+	i = 0;
+	tb_for_each_port_on_path(src, dst, in_port)
+		i++;
+
+	/* Each hop takes two ports */
+	num_hops = i / 2;
 
 	path->hops = kcalloc(num_hops, sizeof(*path->hops), GFP_KERNEL);
 	if (!path->hops) {
diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index 2eb2bcd3cca3..6916168e2c76 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -741,6 +741,18 @@ void tb_port_release_out_hopid(struct tb_port *port, int hopid);
 struct tb_port *tb_next_port_on_path(struct tb_port *start, struct tb_port *end,
 				     struct tb_port *prev);
 
+/**
+ * tb_for_each_port_on_path() - Iterate over each port on path
+ * @src: Source port
+ * @dst: Destination port
+ * @p: Port used as iterator
+ *
+ * Walks over each port on path from @src to @dst.
+ */
+#define tb_for_each_port_on_path(src, dst, p)				\
+	for ((p) = tb_next_port_on_path((src), (dst), NULL); (p);	\
+	     (p) = tb_next_port_on_path((src), (dst), (p)))
+
 int tb_switch_find_vse_cap(struct tb_switch *sw, enum tb_switch_vse_cap vsec);
 int tb_switch_find_cap(struct tb_switch *sw, enum tb_switch_cap cap);
 int tb_port_find_cap(struct tb_port *port, enum tb_port_cap cap);
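[Editorial note] The counting idiom tb_path_alloc() now uses follows
directly from the walk: a path enters each router through one adapter
port and leaves through another, so the number of ports visited is
exactly twice the number of hops. A minimal sketch of the idiom
(hypothetical helper name, built only from the macro added above):

static int example_count_hops(struct tb_port *src, struct tb_port *dst)
{
	struct tb_port *p;
	int ports = 0;

	tb_for_each_port_on_path(src, dst, p)
		ports++;

	/* Each hop takes two ports */
	return ports / 2;
}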
From patchwork Mon Jun 15 14:26:33 2020
X-Patchwork-Id: 214878
From: Mika Westerberg <mika.westerberg@linux.intel.com>
To: linux-usb@vger.kernel.org
Cc: Andreas Noever, Michael Jamet, Yehezkel Bernat,
    Greg Kroah-Hartman, Rajmohan Mani, Lukas Wunner
Subject: [PATCH 05/17] thunderbolt: Handle incomplete PCIe/USB3 paths correctly in discovery
Date: Mon, 15 Jun 2020 17:26:33 +0300
Message-Id: <20200615142645.56209-6-mika.westerberg@linux.intel.com>
In-Reply-To: <20200615142645.56209-1-mika.westerberg@linux.intel.com>

If a path is not complete when we do discovery, the number of hops may
be less than in the full path. This can happen, for example, when the
user unloads the driver, disconnects part of the topology, and loads the
driver back while a PCIe or USB3 tunnel is involved. Take this into
account in tb_pci_init_path() and tb_usb3_init_path() and prevent
potential access beyond the array limits.

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/tunnel.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/drivers/thunderbolt/tunnel.c b/drivers/thunderbolt/tunnel.c
index c144ca9b032c..5bdb8b11345e 100644
--- a/drivers/thunderbolt/tunnel.c
+++ b/drivers/thunderbolt/tunnel.c
@@ -124,8 +124,9 @@ static void tb_pci_init_path(struct tb_path *path)
 	path->drop_packages = 0;
 	path->nfc_credits = 0;
 	path->hops[0].initial_credits = 7;
-	path->hops[1].initial_credits =
-		tb_initial_credits(path->hops[1].in_port->sw);
+	if (path->path_length > 1)
+		path->hops[1].initial_credits =
+			tb_initial_credits(path->hops[1].in_port->sw);
 }
 
 /**
@@ -879,8 +880,9 @@ static void tb_usb3_init_path(struct tb_path *path)
 	path->drop_packages = 0;
 	path->nfc_credits = 0;
 	path->hops[0].initial_credits = 7;
-	path->hops[1].initial_credits =
-		tb_initial_credits(path->hops[1].in_port->sw);
+	if (path->path_length > 1)
+		path->hops[1].initial_credits =
+			tb_initial_credits(path->hops[1].in_port->sw);
 }
 
 /**
From patchwork Mon Jun 15 14:26:34 2020
X-Patchwork-Id: 214879
From: Mika Westerberg <mika.westerberg@linux.intel.com>
To: linux-usb@vger.kernel.org
Cc: Andreas Noever, Michael Jamet, Yehezkel Bernat,
    Greg Kroah-Hartman, Rajmohan Mani, Lukas Wunner
Subject: [PATCH 06/17] thunderbolt: Increase path length in discovery
Date: Mon, 15 Jun 2020 17:26:34 +0300
Message-Id: <20200615142645.56209-7-mika.westerberg@linux.intel.com>
In-Reply-To: <20200615142645.56209-1-mika.westerberg@linux.intel.com>

So far we have only supported paths that follow a daisy-chain topology,
but USB4 also allows building trees of devices. For this reason,
increase the maximum path length used for discovery so that it covers a
path from the lowest level to the host router and back to the same
level.

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/tb.h | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index 6916168e2c76..b53ef5be7263 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -286,7 +286,11 @@ struct tb_path {
 
 /* HopIDs 0-7 are reserved by the Thunderbolt protocol */
 #define TB_PATH_MIN_HOPID	8
-#define TB_PATH_MAX_HOPS	7
+/*
+ * Support paths from the farthest (depth 6) router to the host and back
+ * to the same level (not necessarily to the same router).
+ */
+#define TB_PATH_MAX_HOPS	(7 * 2)
 
 /**
  * struct tb_cm_ops - Connection manager specific operations vector
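[Editorial note] For reference, the arithmetic behind the new bound:
every router on a path contributes one hop (the path enters through one
adapter and leaves through another), and a router at the maximum depth
of 6 has seven routers on its path to the host, counting both ends. A
discovered path that climbs from the deepest level to the host and
descends back to the same depth in another branch therefore needs at
most 7 * 2 hop entries. The worst case exercised by the KUnit test added
later in this series (tb_test_tunnel_dp_max_length) is 13 hops, which
fits within the new limit.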
From patchwork Mon Jun 15 14:26:37 2020
X-Patchwork-Id: 214881
From: Mika Westerberg <mika.westerberg@linux.intel.com>
To: linux-usb@vger.kernel.org
Cc: Andreas Noever, Michael Jamet, Yehezkel Bernat,
    Greg Kroah-Hartman, Rajmohan Mani, Lukas Wunner
Subject: [PATCH 09/17] thunderbolt: Do not tunnel USB3 if link is not USB4
Date: Mon, 15 Jun 2020 17:26:37 +0300
Message-Id: <20200615142645.56209-10-mika.westerberg@linux.intel.com>
In-Reply-To: <20200615142645.56209-1-mika.westerberg@linux.intel.com>

USB3 tunneling is possible only over a USB4 link, so don't create USB3
tunnels if the link is not USB4.

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/tb.c      |  3 +++
 drivers/thunderbolt/tb.h      |  2 ++
 drivers/thunderbolt/tb_regs.h |  1 +
 drivers/thunderbolt/usb4.c    | 24 +++++++++++++++++++++---
 4 files changed, 27 insertions(+), 3 deletions(-)

diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
index 55daa7f1a87d..2da82259e77c 100644
--- a/drivers/thunderbolt/tb.c
+++ b/drivers/thunderbolt/tb.c
@@ -235,6 +235,9 @@ static int tb_tunnel_usb3(struct tb *tb, struct tb_switch *sw)
 	if (!up)
 		return 0;
 
+	if (!sw->link_usb4)
+		return 0;
+
 	/*
 	 * Look up available down port. Since we are chaining it should
 	 * be found right above this switch.
diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index b53ef5be7263..de8124949eaf 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -97,6 +97,7 @@ struct tb_switch_tmu {
  * @device_name: Name of the device (or %NULL if not known)
  * @link_speed: Speed of the link in Gb/s
  * @link_width: Width of the link (1 or 2)
+ * @link_usb4: Upstream link is USB4
  * @generation: Switch Thunderbolt generation
  * @cap_plug_events: Offset to the plug events capability (%0 if not found)
  * @cap_lc: Offset to the link controller capability (%0 if not found)
@@ -136,6 +137,7 @@ struct tb_switch {
 	const char *device_name;
 	unsigned int link_speed;
 	unsigned int link_width;
+	bool link_usb4;
 	unsigned int generation;
 	int cap_plug_events;
 	int cap_lc;
diff --git a/drivers/thunderbolt/tb_regs.h b/drivers/thunderbolt/tb_regs.h
index c29c5075525a..77d4b8598835 100644
--- a/drivers/thunderbolt/tb_regs.h
+++ b/drivers/thunderbolt/tb_regs.h
@@ -290,6 +290,7 @@ struct tb_regs_port_header {
 /* USB4 port registers */
 #define PORT_CS_18			0x12
 #define PORT_CS_18_BE			BIT(8)
+#define PORT_CS_18_TCM			BIT(9)
 #define PORT_CS_19			0x13
 #define PORT_CS_19_PC			BIT(3)
 
diff --git a/drivers/thunderbolt/usb4.c b/drivers/thunderbolt/usb4.c
index 50c7534ba31e..393771d50962 100644
--- a/drivers/thunderbolt/usb4.c
+++ b/drivers/thunderbolt/usb4.c
@@ -192,6 +192,20 @@ static int usb4_switch_op(struct tb_switch *sw, u16 opcode, u8 *status)
 	return 0;
 }
 
+static bool link_is_usb4(struct tb_port *port)
+{
+	u32 val;
+
+	if (!port->cap_usb4)
+		return false;
+
+	if (tb_port_read(port, &val, TB_CFG_PORT,
+			 port->cap_usb4 + PORT_CS_18, 1))
+		return false;
+
+	return !(val & PORT_CS_18_TCM);
+}
+
 /**
  * usb4_switch_setup() - Additional setup for USB4 device
  * @sw: USB4 router to setup
@@ -205,6 +219,7 @@ static int usb4_switch_op(struct tb_switch *sw, u16 opcode, u8 *status)
  */
 int usb4_switch_setup(struct tb_switch *sw)
 {
+	struct tb_port *downstream_port;
 	struct tb_switch *parent;
 	bool tbt3, xhci;
 	u32 val = 0;
@@ -217,6 +232,11 @@ int usb4_switch_setup(struct tb_switch *sw)
 	if (ret)
 		return ret;
 
+	parent = tb_switch_parent(sw);
+	downstream_port = tb_port_at(tb_route(sw), parent);
+	sw->link_usb4 = link_is_usb4(downstream_port);
+	tb_sw_dbg(sw, "link: %s\n", sw->link_usb4 ? "USB4" : "TBT3");
+
 	xhci = val & ROUTER_CS_6_HCI;
 	tbt3 = !(val & ROUTER_CS_6_TNS);
 
@@ -227,9 +247,7 @@ int usb4_switch_setup(struct tb_switch *sw)
 	if (ret)
 		return ret;
 
-	parent = tb_switch_parent(sw);
-
-	if (tb_switch_find_port(parent, TB_TYPE_USB3_DOWN)) {
+	if (sw->link_usb4 && tb_switch_find_port(parent, TB_TYPE_USB3_DOWN)) {
 		val |= ROUTER_CS_5_UTO;
 		xhci = false;
 	}
From patchwork Mon Jun 15 14:26:40 2020
X-Patchwork-Id: 214880
From: Mika Westerberg <mika.westerberg@linux.intel.com>
To: linux-usb@vger.kernel.org
Cc: Andreas Noever, Michael Jamet, Yehezkel Bernat,
    Greg Kroah-Hartman, Rajmohan Mani, Lukas Wunner
Subject: [PATCH 12/17] thunderbolt: Report consumed bandwidth in both directions
Date: Mon, 15 Jun 2020 17:26:40 +0300
Message-Id: <20200615142645.56209-13-mika.westerberg@linux.intel.com>
In-Reply-To: <20200615142645.56209-1-mika.westerberg@linux.intel.com>

Whereas DisplayPort bandwidth is consumed only in one direction (from
the DP IN adapter to the DP OUT adapter), USB3 adds separate bandwidth
for both upstream and downstream directions. For this reason, extend the
tunnel consumed-bandwidth routines to support both directions, and
implement this for DP.

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/tb.c     |  9 ++++---
 drivers/thunderbolt/tunnel.c | 47 +++++++++++++++++++++++++++++-------
 drivers/thunderbolt/tunnel.h |  6 +++--
 3 files changed, 47 insertions(+), 15 deletions(-)

diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
index 9dbdb11685fa..53f9673c1395 100644
--- a/drivers/thunderbolt/tb.c
+++ b/drivers/thunderbolt/tb.c
@@ -535,7 +535,7 @@ static int tb_available_bw(struct tb_cm *tcm, struct tb_port *in,
 {
 	struct tb_switch *sw = out->sw;
 	struct tb_tunnel *tunnel;
-	int bw, available_bw = 40000;
+	int ret, bw, available_bw = 40000;
 
 	while (sw && sw != in->sw) {
 		bw = sw->link_speed * sw->link_width * 1000; /* Mb/s */
@@ -553,9 +553,10 @@ static int tb_available_bw(struct tb_cm *tcm, struct tb_port *in,
 			if (!tb_tunnel_switch_on_path(tunnel, sw))
 				continue;
 
-			consumed_bw = tb_tunnel_consumed_bandwidth(tunnel);
-			if (consumed_bw < 0)
-				return consumed_bw;
+			ret = tb_tunnel_consumed_bandwidth(tunnel, NULL,
+							   &consumed_bw);
+			if (ret)
+				return ret;
 
 			bw -= consumed_bw;
 		}
diff --git a/drivers/thunderbolt/tunnel.c b/drivers/thunderbolt/tunnel.c
index 5bdb8b11345e..45f7a50a48ff 100644
--- a/drivers/thunderbolt/tunnel.c
+++ b/drivers/thunderbolt/tunnel.c
@@ -536,7 +536,8 @@ static int tb_dp_activate(struct tb_tunnel *tunnel, bool active)
 	return 0;
 }
 
-static int tb_dp_consumed_bandwidth(struct tb_tunnel *tunnel)
+static int tb_dp_consumed_bandwidth(struct tb_tunnel *tunnel, int *consumed_up,
+				    int *consumed_down)
 {
 	struct tb_port *in = tunnel->src_port;
 	const struct tb_switch *sw = in->sw;
@@ -580,10 +581,20 @@ static int tb_dp_consumed_bandwidth(struct tb_tunnel *tunnel)
 		lanes = tb_dp_cap_get_lanes(val);
 	} else {
 		/* No bandwidth management for legacy devices */
+		*consumed_up = 0;
+		*consumed_down = 0;
 		return 0;
 	}
 
-	return tb_dp_bandwidth(rate, lanes);
+	if (in->sw->config.depth < tunnel->dst_port->sw->config.depth) {
+		*consumed_up = 0;
+		*consumed_down = tb_dp_bandwidth(rate, lanes);
+	} else {
+		*consumed_up = tb_dp_bandwidth(rate, lanes);
+		*consumed_down = 0;
+	}
+
+	return 0;
 }
 
 static void tb_dp_init_aux_path(struct tb_path *path)
@@ -1174,21 +1185,39 @@ static bool tb_tunnel_is_active(const struct tb_tunnel *tunnel)
 /**
  * tb_tunnel_consumed_bandwidth() - Return bandwidth consumed by the tunnel
  * @tunnel: Tunnel to check
+ * @consumed_up: Consumed bandwidth in Mb/s from @dst_port to @src_port.
+ *		 Can be %NULL.
+ * @consumed_down: Consumed bandwidth in Mb/s from @src_port to @dst_port.
+ *		   Can be %NULL.
  *
- * Returns bandwidth currently consumed by @tunnel and %0 if the @tunnel
- * is not active or does consume bandwidth.
+ * Stores the amount of isochronous bandwidth @tunnel consumes in
+ * @consumed_up and @consumed_down. In case of success returns %0,
+ * negative errno otherwise.
  */
-int tb_tunnel_consumed_bandwidth(struct tb_tunnel *tunnel)
+int tb_tunnel_consumed_bandwidth(struct tb_tunnel *tunnel, int *consumed_up,
+				 int *consumed_down)
 {
+	int up_bw = 0, down_bw = 0;
+
 	if (!tb_tunnel_is_active(tunnel))
-		return 0;
+		goto out;
 
 	if (tunnel->consumed_bandwidth) {
-		int ret = tunnel->consumed_bandwidth(tunnel);
+		int ret;
 
-		tb_tunnel_dbg(tunnel, "consumed bandwidth %d Mb/s\n", ret);
-		return ret;
+		ret = tunnel->consumed_bandwidth(tunnel, &up_bw, &down_bw);
+		if (ret)
+			return ret;
+
+		tb_tunnel_dbg(tunnel, "consumed bandwidth %d/%d Mb/s\n", up_bw,
+			      down_bw);
 	}
 
+out:
+	if (consumed_up)
+		*consumed_up = up_bw;
+	if (consumed_down)
+		*consumed_down = down_bw;
+
 	return 0;
 }
diff --git a/drivers/thunderbolt/tunnel.h b/drivers/thunderbolt/tunnel.h
index 3f5ba93225e7..cc952b2be792 100644
--- a/drivers/thunderbolt/tunnel.h
+++ b/drivers/thunderbolt/tunnel.h
@@ -42,7 +42,8 @@ struct tb_tunnel {
 	size_t npaths;
 	int (*init)(struct tb_tunnel *tunnel);
 	int (*activate)(struct tb_tunnel *tunnel, bool activate);
-	int (*consumed_bandwidth)(struct tb_tunnel *tunnel);
+	int (*consumed_bandwidth)(struct tb_tunnel *tunnel, int *consumed_up,
+				  int *consumed_down);
 	struct list_head list;
 	enum tb_tunnel_type type;
 	unsigned int max_bw;
@@ -69,7 +70,8 @@ void tb_tunnel_deactivate(struct tb_tunnel *tunnel);
 bool tb_tunnel_is_invalid(struct tb_tunnel *tunnel);
 bool tb_tunnel_switch_on_path(const struct tb_tunnel *tunnel,
 			      const struct tb_switch *sw);
-int tb_tunnel_consumed_bandwidth(struct tb_tunnel *tunnel);
+int tb_tunnel_consumed_bandwidth(struct tb_tunnel *tunnel, int *consumed_up,
+				 int *consumed_down);
 
 static inline bool tb_tunnel_is_pci(const struct tb_tunnel *tunnel)
 {
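[Editorial note] With the new signature both out-parameters are
optional, so a caller interested in only one direction can pass NULL for
the other, as tb_available_bw() above now does. A minimal sketch of a
hypothetical caller (not part of the series):

static int example_downstream_bw(struct tb_tunnel *tunnel)
{
	int consumed_down, ret;

	ret = tb_tunnel_consumed_bandwidth(tunnel, NULL, &consumed_down);
	if (ret)
		return ret;

	/* Mb/s consumed from src_port towards dst_port */
	return consumed_down;
}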
From patchwork Mon Jun 15 14:26:41 2020
X-Patchwork-Id: 214882
From: Mika Westerberg <mika.westerberg@linux.intel.com>
To: linux-usb@vger.kernel.org
Cc: Andreas Noever, Michael Jamet, Yehezkel Bernat,
    Greg Kroah-Hartman, Rajmohan Mani, Lukas Wunner
Subject: [PATCH 13/17] thunderbolt: Increase DP DPRX wait timeout
Date: Mon, 15 Jun 2020 17:26:41 +0300
Message-Id: <20200615142645.56209-14-mika.westerberg@linux.intel.com>
In-Reply-To: <20200615142645.56209-1-mika.westerberg@linux.intel.com>

Sometimes it takes longer for DPRX to be set, so increase the timeout to
cope with this.

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/tunnel.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/thunderbolt/tunnel.c b/drivers/thunderbolt/tunnel.c
index 45f7a50a48ff..7896f8b7a69c 100644
--- a/drivers/thunderbolt/tunnel.c
+++ b/drivers/thunderbolt/tunnel.c
@@ -545,7 +545,7 @@ static int tb_dp_consumed_bandwidth(struct tb_tunnel *tunnel, int *consumed_up,
 	int ret;
 
 	if (tb_dp_is_usb4(sw)) {
-		int timeout = 10;
+		int timeout = 20;
 
 		/*
 		 * Wait for DPRX done. Normally it should be already set
From patchwork Mon Jun 15 14:26:45 2020
X-Patchwork-Id: 214877
From: Mika Westerberg <mika.westerberg@linux.intel.com>
To: linux-usb@vger.kernel.org
Cc: Andreas Noever, Michael Jamet, Yehezkel Bernat,
    Greg Kroah-Hartman, Rajmohan Mani, Lukas Wunner
Subject: [PATCH 17/17] thunderbolt: Add KUnit tests for tunneling
Date: Mon, 15 Jun 2020 17:26:45 +0300
Message-Id: <20200615142645.56209-18-mika.westerberg@linux.intel.com>
In-Reply-To: <20200615142645.56209-1-mika.westerberg@linux.intel.com>

Some parts of tunneling, such as path allocation, can be tested without
access to real hardware, so add KUnit tests for PCIe, DP and USB3
tunneling.

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/test.c | 398 +++++++++++++++++++++++++++++++++++++
 1 file changed, 398 insertions(+)

diff --git a/drivers/thunderbolt/test.c b/drivers/thunderbolt/test.c
index 9e60bab46d34..acb8b6256847 100644
--- a/drivers/thunderbolt/test.c
+++ b/drivers/thunderbolt/test.c
@@ -10,6 +10,7 @@
 #include <kunit/test.h>
 
 #include "tb.h"
+#include "tunnel.h"
 
 static int __ida_init(struct kunit_resource *res, void *context)
 {
@@ -1203,6 +1204,396 @@ static void tb_test_path_mixed_chain_reverse(struct kunit *test)
 	tb_path_free(path);
 }
 
+static void tb_test_tunnel_pcie(struct kunit *test)
+{
+	struct tb_switch *host, *dev1, *dev2;
+	struct tb_tunnel *tunnel1, *tunnel2;
+	struct tb_port *down, *up;
+
+	/*
+	 * Create PCIe tunnel between host and two devices.
+	 *
+	 *   [Host]
+	 *    1 |
+	 *    1 |
+	 *   [Device #1]
+	 *    5 |
+	 *    1 |
+	 *   [Device #2]
+	 */
+	host = alloc_host(test);
+	dev1 = alloc_dev_default(test, host, 0x1, true);
+	dev2 = alloc_dev_default(test, dev1, 0x501, true);
+
+	down = &host->ports[8];
+	up = &dev1->ports[9];
+	tunnel1 = tb_tunnel_alloc_pci(NULL, up, down);
+	KUNIT_ASSERT_TRUE(test, tunnel1 != NULL);
+	KUNIT_EXPECT_EQ(test, tunnel1->type, (enum tb_tunnel_type)TB_TUNNEL_PCI);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel1->src_port, down);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel1->dst_port, up);
+	KUNIT_ASSERT_EQ(test, tunnel1->npaths, (size_t)2);
+	KUNIT_ASSERT_EQ(test, tunnel1->paths[0]->path_length, 2);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel1->paths[0]->hops[0].in_port, down);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel1->paths[0]->hops[1].out_port, up);
+	KUNIT_ASSERT_EQ(test, tunnel1->paths[1]->path_length, 2);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel1->paths[1]->hops[0].in_port, up);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel1->paths[1]->hops[1].out_port, down);
+
+	down = &dev1->ports[10];
+	up = &dev2->ports[9];
+	tunnel2 = tb_tunnel_alloc_pci(NULL, up, down);
+	KUNIT_ASSERT_TRUE(test, tunnel2 != NULL);
+	KUNIT_EXPECT_EQ(test, tunnel2->type, (enum tb_tunnel_type)TB_TUNNEL_PCI);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel2->src_port, down);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel2->dst_port, up);
+	KUNIT_ASSERT_EQ(test, tunnel2->npaths, (size_t)2);
+	KUNIT_ASSERT_EQ(test, tunnel2->paths[0]->path_length, 2);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel2->paths[0]->hops[0].in_port, down);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel2->paths[0]->hops[1].out_port, up);
+	KUNIT_ASSERT_EQ(test, tunnel2->paths[1]->path_length, 2);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel2->paths[1]->hops[0].in_port, up);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel2->paths[1]->hops[1].out_port, down);
+
+	tb_tunnel_free(tunnel2);
+	tb_tunnel_free(tunnel1);
+}
+
+static void tb_test_tunnel_dp(struct kunit *test)
+{
+	struct tb_switch *host, *dev;
+	struct tb_port *in, *out;
+	struct tb_tunnel *tunnel;
+
+	/*
+	 * Create DP tunnel between Host and Device
+	 *
+	 *   [Host]
+	 *    1 |
+	 *    1 |
+	 *   [Device]
+	 */
+	host = alloc_host(test);
+	dev = alloc_dev_default(test, host, 0x3, true);
+
+	in = &host->ports[5];
+	out = &dev->ports[13];
+
+	tunnel = tb_tunnel_alloc_dp(NULL, in, out, 0, 0);
+	KUNIT_ASSERT_TRUE(test, tunnel != NULL);
+	KUNIT_EXPECT_EQ(test, tunnel->type, (enum tb_tunnel_type)TB_TUNNEL_DP);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->src_port, in);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->dst_port, out);
+	KUNIT_ASSERT_EQ(test, tunnel->npaths, (size_t)3);
+	KUNIT_ASSERT_EQ(test, tunnel->paths[0]->path_length, 2);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[0].in_port, in);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[1].out_port, out);
+	KUNIT_ASSERT_EQ(test, tunnel->paths[1]->path_length, 2);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[1]->hops[0].in_port, in);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[1]->hops[1].out_port, out);
+	KUNIT_ASSERT_EQ(test, tunnel->paths[2]->path_length, 2);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[2]->hops[0].in_port, out);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[2]->hops[1].out_port, in);
+	tb_tunnel_free(tunnel);
+}
+
+static void tb_test_tunnel_dp_chain(struct kunit *test)
+{
+	struct tb_switch *host, *dev1, *dev4;
+	struct tb_port *in, *out;
+	struct tb_tunnel *tunnel;
+
+	/*
+	 * Create DP tunnel from Host DP IN to Device #4 DP OUT.
+	 *
+	 *          [Host]
+	 *           1 |
+	 *           1 |
+	 *         [Device #1]
+	 *       3 /   | 5   \ 7
+	 *      1 /    |      \ 1
+	 * [Device #2] |    [Device #4]
+	 *             | 1
+	 *         [Device #3]
+	 */
+	host = alloc_host(test);
+	dev1 = alloc_dev_default(test, host, 0x1, true);
+	alloc_dev_default(test, dev1, 0x301, true);
+	alloc_dev_default(test, dev1, 0x501, true);
+	dev4 = alloc_dev_default(test, dev1, 0x701, true);
+
+	in = &host->ports[5];
+	out = &dev4->ports[14];
+
+	tunnel = tb_tunnel_alloc_dp(NULL, in, out, 0, 0);
+	KUNIT_ASSERT_TRUE(test, tunnel != NULL);
+	KUNIT_EXPECT_EQ(test, tunnel->type, (enum tb_tunnel_type)TB_TUNNEL_DP);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->src_port, in);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->dst_port, out);
+	KUNIT_ASSERT_EQ(test, tunnel->npaths, (size_t)3);
+	KUNIT_ASSERT_EQ(test, tunnel->paths[0]->path_length, 3);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[0].in_port, in);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[2].out_port, out);
+	KUNIT_ASSERT_EQ(test, tunnel->paths[1]->path_length, 3);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[1]->hops[0].in_port, in);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[1]->hops[2].out_port, out);
+	KUNIT_ASSERT_EQ(test, tunnel->paths[2]->path_length, 3);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[2]->hops[0].in_port, out);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[2]->hops[2].out_port, in);
+	tb_tunnel_free(tunnel);
+}
+
+static void tb_test_tunnel_dp_tree(struct kunit *test)
+{
+	struct tb_switch *host, *dev1, *dev2, *dev3, *dev5;
+	struct tb_port *in, *out;
+	struct tb_tunnel *tunnel;
+
+	/*
+	 * Create DP tunnel from Device #2 DP IN to Device #5 DP OUT.
+	 *
+	 *          [Host]
+	 *           3 |
+	 *           1 |
+	 *         [Device #1]
+	 *       3 /   | 5   \ 7
+	 *      1 /    |      \ 1
+	 * [Device #2] |    [Device #4]
+	 *             | 1
+	 *         [Device #3]
+	 *             | 5
+	 *             | 1
+	 *         [Device #5]
+	 */
+	host = alloc_host(test);
+	dev1 = alloc_dev_default(test, host, 0x3, true);
+	dev2 = alloc_dev_with_dpin(test, dev1, 0x303, true);
+	dev3 = alloc_dev_default(test, dev1, 0x503, true);
+	alloc_dev_default(test, dev1, 0x703, true);
+	dev5 = alloc_dev_default(test, dev3, 0x50503, true);
+
+	in = &dev2->ports[13];
+	out = &dev5->ports[13];
+
+	tunnel = tb_tunnel_alloc_dp(NULL, in, out, 0, 0);
+	KUNIT_ASSERT_TRUE(test, tunnel != NULL);
+	KUNIT_EXPECT_EQ(test, tunnel->type, (enum tb_tunnel_type)TB_TUNNEL_DP);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->src_port, in);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->dst_port, out);
+	KUNIT_ASSERT_EQ(test, tunnel->npaths, (size_t)3);
+	KUNIT_ASSERT_EQ(test, tunnel->paths[0]->path_length, 4);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[0].in_port, in);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[3].out_port, out);
+	KUNIT_ASSERT_EQ(test, tunnel->paths[1]->path_length, 4);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[1]->hops[0].in_port, in);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[1]->hops[3].out_port, out);
+	KUNIT_ASSERT_EQ(test, tunnel->paths[2]->path_length, 4);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[2]->hops[0].in_port, out);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[2]->hops[3].out_port, in);
+	tb_tunnel_free(tunnel);
+}
+
+static void tb_test_tunnel_dp_max_length(struct kunit *test)
+{
+	struct tb_switch *host, *dev1, *dev2, *dev3, *dev4, *dev5, *dev6;
+	struct tb_switch *dev7, *dev8, *dev9, *dev10, *dev11, *dev12;
+	struct tb_port *in, *out;
+	struct tb_tunnel *tunnel;
+
+	/*
+	 * Creates DP tunnel from Device #6 to Device #12.
+	 *
+	 *          [Host]
+	 *         1 /  \ 3
+	 *        1 /    \ 1
+	 * [Device #1]   [Device #7]
+	 *     3 |           | 3
+	 *     1 |           | 1
+	 * [Device #2]   [Device #8]
+	 *     3 |           | 3
+	 *     1 |           | 1
+	 * [Device #3]   [Device #9]
+	 *     3 |           | 3
+	 *     1 |           | 1
+	 * [Device #4]   [Device #10]
+	 *     3 |           | 3
+	 *     1 |           | 1
+	 * [Device #5]   [Device #11]
+	 *     3 |           | 3
+	 *     1 |           | 1
+	 * [Device #6]   [Device #12]
+	 */
+	host = alloc_host(test);
+	dev1 = alloc_dev_default(test, host, 0x1, true);
+	dev2 = alloc_dev_default(test, dev1, 0x301, true);
+	dev3 = alloc_dev_default(test, dev2, 0x30301, true);
+	dev4 = alloc_dev_default(test, dev3, 0x3030301, true);
+	dev5 = alloc_dev_default(test, dev4, 0x303030301, true);
+	dev6 = alloc_dev_with_dpin(test, dev5, 0x30303030301, true);
+	dev7 = alloc_dev_default(test, host, 0x3, true);
+	dev8 = alloc_dev_default(test, dev7, 0x303, true);
+	dev9 = alloc_dev_default(test, dev8, 0x30303, true);
+	dev10 = alloc_dev_default(test, dev9, 0x3030303, true);
+	dev11 = alloc_dev_default(test, dev10, 0x303030303, true);
+	dev12 = alloc_dev_default(test, dev11, 0x30303030303, true);
+
+	in = &dev6->ports[13];
+	out = &dev12->ports[13];
+
+	tunnel = tb_tunnel_alloc_dp(NULL, in, out, 0, 0);
+	KUNIT_ASSERT_TRUE(test, tunnel != NULL);
+	KUNIT_EXPECT_EQ(test, tunnel->type, (enum tb_tunnel_type)TB_TUNNEL_DP);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->src_port, in);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->dst_port, out);
+	KUNIT_ASSERT_EQ(test, tunnel->npaths, (size_t)3);
+	KUNIT_ASSERT_EQ(test, tunnel->paths[0]->path_length, 13);
+	/* First hop */
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[0].in_port, in);
+	/* Middle */
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[6].in_port,
+			    &host->ports[1]);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[6].out_port,
+			    &host->ports[3]);
+	/* Last */
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[12].out_port, out);
+	KUNIT_ASSERT_EQ(test, tunnel->paths[1]->path_length, 13);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[1]->hops[0].in_port, in);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[1]->hops[6].in_port,
+			    &host->ports[1]);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[1]->hops[6].out_port,
+			    &host->ports[3]);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[1]->hops[12].out_port, out);
+	KUNIT_ASSERT_EQ(test, tunnel->paths[2]->path_length, 13);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[2]->hops[0].in_port, out);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[2]->hops[6].in_port,
+			    &host->ports[3]);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[2]->hops[6].out_port,
+			    &host->ports[1]);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[2]->hops[12].out_port, in);
+	tb_tunnel_free(tunnel);
+}
+
+static void tb_test_tunnel_usb3(struct kunit *test)
+{
+	struct tb_switch *host, *dev1, *dev2;
+	struct tb_tunnel *tunnel1, *tunnel2;
+	struct tb_port *down, *up;
+
+	/*
+	 * Create USB3 tunnel between host and two devices.
+	 *
+	 *   [Host]
+	 *    1 |
+	 *    1 |
+	 *   [Device #1]
+	 *          \ 7
+	 *           \ 1
+	 *          [Device #2]
+	 */
+	host = alloc_host(test);
+	dev1 = alloc_dev_default(test, host, 0x1, true);
+	dev2 = alloc_dev_default(test, dev1, 0x701, true);
+
+	down = &host->ports[12];
+	up = &dev1->ports[16];
+	tunnel1 = tb_tunnel_alloc_usb3(NULL, up, down, 0, 0);
+	KUNIT_ASSERT_TRUE(test, tunnel1 != NULL);
+	KUNIT_EXPECT_EQ(test, tunnel1->type, (enum tb_tunnel_type)TB_TUNNEL_USB3);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel1->src_port, down);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel1->dst_port, up);
+	KUNIT_ASSERT_EQ(test, tunnel1->npaths, (size_t)2);
+	KUNIT_ASSERT_EQ(test, tunnel1->paths[0]->path_length, 2);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel1->paths[0]->hops[0].in_port, down);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel1->paths[0]->hops[1].out_port, up);
+	KUNIT_ASSERT_EQ(test, tunnel1->paths[1]->path_length, 2);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel1->paths[1]->hops[0].in_port, up);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel1->paths[1]->hops[1].out_port, down);
+
+	down = &dev1->ports[17];
+	up = &dev2->ports[16];
+	tunnel2 = tb_tunnel_alloc_usb3(NULL, up, down, 0, 0);
+	KUNIT_ASSERT_TRUE(test, tunnel2 != NULL);
+	KUNIT_EXPECT_EQ(test, tunnel2->type, (enum tb_tunnel_type)TB_TUNNEL_USB3);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel2->src_port, down);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel2->dst_port, up);
+	KUNIT_ASSERT_EQ(test, tunnel2->npaths, (size_t)2);
+	KUNIT_ASSERT_EQ(test, tunnel2->paths[0]->path_length, 2);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel2->paths[0]->hops[0].in_port, down);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel2->paths[0]->hops[1].out_port, up);
+	KUNIT_ASSERT_EQ(test, tunnel2->paths[1]->path_length, 2);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel2->paths[1]->hops[0].in_port, up);
+	KUNIT_EXPECT_PTR_EQ(test, tunnel2->paths[1]->hops[1].out_port, down);
+
+	tb_tunnel_free(tunnel2);
+	tb_tunnel_free(tunnel1);
+}
+
+static void tb_test_tunnel_port_on_path(struct kunit *test)
+{
+	struct tb_switch *host, *dev1, *dev2, *dev3, *dev4, *dev5;
+	struct tb_port *in, *out, *port;
+	struct tb_tunnel *dp_tunnel;
+
+	/*
+	 *          [Host]
+	 *           3 |
+	 *           1 |
+	 *         [Device #1]
+	 *       3 /   | 5   \ 7
+	 *      1 /    |      \ 1
+	 * [Device #2] |    [Device #4]
+	 *             | 1
+	 *         [Device #3]
+	 *             | 5
+	 *             | 1
+	 *         [Device #5]
+	 */
+	host = alloc_host(test);
+	dev1 = alloc_dev_default(test, host, 0x3, true);
+	dev2 = alloc_dev_with_dpin(test, dev1, 0x303, true);
+	dev3 = alloc_dev_default(test, dev1, 0x503, true);
+	dev4 = alloc_dev_default(test, dev1, 0x703, true);
+	dev5 = alloc_dev_default(test, dev3, 0x50503, true);
+
+	in = &dev2->ports[13];
+	out = &dev5->ports[13];
+
+	dp_tunnel = tb_tunnel_alloc_dp(NULL, in, out, 0, 0);
+	KUNIT_ASSERT_TRUE(test, dp_tunnel != NULL);
+
+	KUNIT_EXPECT_TRUE(test, tb_tunnel_port_on_path(dp_tunnel, in));
+	KUNIT_EXPECT_TRUE(test, tb_tunnel_port_on_path(dp_tunnel, out));
+
+	port = &host->ports[8];
+	KUNIT_EXPECT_FALSE(test, tb_tunnel_port_on_path(dp_tunnel, port));
+
+	port = &host->ports[3];
+	KUNIT_EXPECT_FALSE(test, tb_tunnel_port_on_path(dp_tunnel, port));
+
+	port = &dev1->ports[1];
+	KUNIT_EXPECT_FALSE(test, tb_tunnel_port_on_path(dp_tunnel, port));
+
+	port = &dev1->ports[3];
+	KUNIT_EXPECT_TRUE(test, tb_tunnel_port_on_path(dp_tunnel, port));
+
+	port = &dev1->ports[5];
+	KUNIT_EXPECT_TRUE(test, tb_tunnel_port_on_path(dp_tunnel, port));
+
+	port = &dev1->ports[7];
+	KUNIT_EXPECT_FALSE(test, tb_tunnel_port_on_path(dp_tunnel, port));
+
+	port = &dev3->ports[1];
+	KUNIT_EXPECT_TRUE(test, tb_tunnel_port_on_path(dp_tunnel, port));
+
+	port = &dev5->ports[1];
+	KUNIT_EXPECT_TRUE(test, tb_tunnel_port_on_path(dp_tunnel, port));
+
+	port = &dev4->ports[1];
+	KUNIT_EXPECT_FALSE(test, tb_tunnel_port_on_path(dp_tunnel, port));
+
+	tb_tunnel_free(dp_tunnel);
+}
+
 static struct kunit_case tb_test_cases[] = {
 	KUNIT_CASE(tb_test_path_basic),
 	KUNIT_CASE(tb_test_path_not_connected_walk),
@@ -1218,6 +1609,13 @@ static struct kunit_case tb_test_cases[] = {
 	KUNIT_CASE(tb_test_path_not_bonded_lane1_chain_reverse),
 	KUNIT_CASE(tb_test_path_mixed_chain),
 	KUNIT_CASE(tb_test_path_mixed_chain_reverse),
+	KUNIT_CASE(tb_test_tunnel_pcie),
+	KUNIT_CASE(tb_test_tunnel_dp),
+	KUNIT_CASE(tb_test_tunnel_dp_chain),
+	KUNIT_CASE(tb_test_tunnel_dp_tree),
+	KUNIT_CASE(tb_test_tunnel_dp_max_length),
+	KUNIT_CASE(tb_test_tunnel_port_on_path),
+	KUNIT_CASE(tb_test_tunnel_usb3),
 	{ }
 };
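[Editorial note] The same fixtures make it easy to grow the suite. A
minimal sketch of a hypothetical additional case (not part of the patch;
it assumes the port-numbering conventions the tests above rely on, where
port 1 is the upstream adapter and the host exposes DP IN at port 5 and
devices DP OUT at port 13) that checks the port walk directly:

static void tb_test_example_port_walk(struct kunit *test)
{
	struct tb_switch *host, *dev;
	struct tb_port *p;
	int ports = 0;

	host = alloc_host(test);
	dev = alloc_dev_default(test, host, 0x1, true);

	/* Host DP IN to device DP OUT: two hops, so four ports visited */
	tb_for_each_port_on_path(&host->ports[5], &dev->ports[13], p)
		ports++;

	KUNIT_EXPECT_EQ(test, ports, 4);
}

The suite runs under the regular KUnit wrapper once the driver's KUnit
test option is enabled in the kernel configuration.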