From patchwork Thu Mar 4 12:31:08 2021
X-Patchwork-Submitter: Mika Westerberg <mika.westerberg@linux.intel.com>
X-Patchwork-Id: 394143
From: Mika Westerberg <mika.westerberg@linux.intel.com>
To: linux-usb@vger.kernel.org
Cc: Michael Jamet, Yehezkel Bernat, Andreas Noever, Isaac Hazan,
    Mika Westerberg, Lukas Wunner, "David S. Miller", Jakub Kicinski,
    netdev@vger.kernel.org
Subject: [PATCH 01/18] thunderbolt: Disable retry logic for intra-domain control packets
Date: Thu, 4 Mar 2021 15:31:08 +0300
Message-Id: <20210304123125.43630-2-mika.westerberg@linux.intel.com>
In-Reply-To: <20210304123125.43630-1-mika.westerberg@linux.intel.com>

In most cases the response packet is lost because the router in
question was disconnected by the user. Resending the control packet in
that case just adds unnecessary delays, so disable that for
intra-domain control packets. For inter-domain (XDomain) packets we
continue retrying. This also aligns the driver better with what the
Intel connection manager firmware is doing.
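For context, the constant bounds how many times a request is posted
before the driver gives up. A minimal sketch of that shape, with a
hypothetical helper name (the driver's real request machinery lives in
ctl.c and is not shown here):

    /* With TB_CTL_RETRIES == 1 a request is posted once and a lost
     * response fails fast with -ETIMEDOUT instead of adding up to
     * (retries - 1) extra timeout periods of delay.
     */
    #define TB_CTL_RETRIES	1

    static int send_with_retries(struct tb_ctl *ctl, struct tb_cfg_request *req)
    {
    	int i, ret = -ETIMEDOUT;

    	for (i = 0; i < TB_CTL_RETRIES && ret == -ETIMEDOUT; i++)
    		ret = post_request_and_wait(ctl, req); /* hypothetical */

    	return ret;
    }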
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/ctl.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/thunderbolt/ctl.c b/drivers/thunderbolt/ctl.c
index f1aeaff9f368..875922133782 100644
--- a/drivers/thunderbolt/ctl.c
+++ b/drivers/thunderbolt/ctl.c
@@ -17,7 +17,7 @@
 
 #define TB_CTL_RX_PKG_COUNT	10
-#define TB_CTL_RETRIES		4
+#define TB_CTL_RETRIES		1
 
 /**
  * struct tb_ctl - Thunderbolt control channel

From patchwork Thu Mar 4 12:31:09 2021
X-Patchwork-Submitter: Mika Westerberg <mika.westerberg@linux.intel.com>
X-Patchwork-Id: 394141
From: Mika Westerberg <mika.westerberg@linux.intel.com>
To: linux-usb@vger.kernel.org
Cc: Michael Jamet, Yehezkel Bernat, Andreas Noever, Isaac Hazan,
    Mika Westerberg, Lukas Wunner, "David S. Miller", Jakub Kicinski,
    netdev@vger.kernel.org
Subject: [PATCH 02/18] thunderbolt: Do not pass timeout for tb_cfg_reset()
Date: Thu, 4 Mar 2021 15:31:09 +0300
Message-Id: <20210304123125.43630-3-mika.westerberg@linux.intel.com>
In-Reply-To: <20210304123125.43630-1-mika.westerberg@linux.intel.com>

There is only one user of this function and it passes the default
timeout anyway, so remove the parameter completely. This is also needed
in the subsequent patch where we allow connection manager
implementations to use a different timeout for non-raw control channel
messages.
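This is the usual fold-the-default refactor: when the only caller
passes the default value, push the default into the callee. A generic
standalone sketch (names here are illustrative, not the driver's):

    #define DEFAULT_TIMEOUT_MSEC 5000

    /* Before: int cfg_reset(u64 route, int timeout_msec); and the one
     * caller always passed DEFAULT_TIMEOUT_MSEC. After: the parameter
     * is gone and the callee supplies the default itself.
     */
    static int cfg_reset(u64 route)
    {
    	return request_sync(route, DEFAULT_TIMEOUT_MSEC); /* hypothetical */
    }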
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/ctl.c    | 6 ++----
 drivers/thunderbolt/ctl.h    | 3 +--
 drivers/thunderbolt/switch.c | 2 +-
 3 files changed, 4 insertions(+), 7 deletions(-)

diff --git a/drivers/thunderbolt/ctl.c b/drivers/thunderbolt/ctl.c
index 875922133782..b79be1f02d92 100644
--- a/drivers/thunderbolt/ctl.c
+++ b/drivers/thunderbolt/ctl.c
@@ -802,14 +802,12 @@ static bool tb_cfg_copy(struct tb_cfg_request *req, const struct ctl_pkg *pkg)
  * tb_cfg_reset() - send a reset packet and wait for a response
  * @ctl: Control channel pointer
  * @route: Router string for the router to send reset
- * @timeout_msec: Timeout in ms how long to wait for the response
  *
  * If the switch at route is incorrectly configured then we will not receive a
  * reply (even though the switch will reset). The caller should check for
  * -ETIMEDOUT and attempt to reconfigure the switch.
  */
-struct tb_cfg_result tb_cfg_reset(struct tb_ctl *ctl, u64 route,
-				  int timeout_msec)
+struct tb_cfg_result tb_cfg_reset(struct tb_ctl *ctl, u64 route)
 {
 	struct cfg_reset_pkg request = { .header = tb_cfg_make_header(route) };
 	struct tb_cfg_result res = { 0 };
@@ -831,7 +829,7 @@ struct tb_cfg_result tb_cfg_reset(struct tb_ctl *ctl, u64 route,
 	req->response_size = sizeof(reply);
 	req->response_type = TB_CFG_PKG_RESET;
 
-	res = tb_cfg_request_sync(ctl, req, timeout_msec);
+	res = tb_cfg_request_sync(ctl, req, TB_CFG_DEFAULT_TIMEOUT);
 
 	tb_cfg_request_put(req);
 
diff --git a/drivers/thunderbolt/ctl.h b/drivers/thunderbolt/ctl.h
index 97cb03b38953..2eafbfea5dff 100644
--- a/drivers/thunderbolt/ctl.h
+++ b/drivers/thunderbolt/ctl.h
@@ -124,8 +124,7 @@ static inline struct tb_cfg_header tb_cfg_make_header(u64 route)
 }
 
 int tb_cfg_ack_plug(struct tb_ctl *ctl, u64 route, u32 port, bool unplug);
-struct tb_cfg_result tb_cfg_reset(struct tb_ctl *ctl, u64 route,
-				  int timeout_msec);
+struct tb_cfg_result tb_cfg_reset(struct tb_ctl *ctl, u64 route);
 struct tb_cfg_result tb_cfg_read_raw(struct tb_ctl *ctl, void *buffer,
 				     u64 route, u32 port,
 				     enum tb_cfg_space space, u32 offset,
diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index 2a95b4ce06c0..218869c6ee21 100644
--- a/drivers/thunderbolt/switch.c
+++ b/drivers/thunderbolt/switch.c
@@ -1331,7 +1331,7 @@ int tb_switch_reset(struct tb_switch *sw)
 			       TB_CFG_SWITCH, 2, 2);
 	if (res.err)
 		return res.err;
-	res = tb_cfg_reset(sw->tb->ctl, tb_route(sw), TB_CFG_DEFAULT_TIMEOUT);
+	res = tb_cfg_reset(sw->tb->ctl, tb_route(sw));
 	if (res.err > 0)
 		return -EIO;
 	return res.err;
From patchwork Thu Mar 4 12:31:10 2021
X-Patchwork-Submitter: Mika Westerberg <mika.westerberg@linux.intel.com>
X-Patchwork-Id: 393304
From: Mika Westerberg <mika.westerberg@linux.intel.com>
To: linux-usb@vger.kernel.org
Cc: Michael Jamet, Yehezkel Bernat, Andreas Noever, Isaac Hazan,
    Mika Westerberg, Lukas Wunner, "David S. Miller", Jakub Kicinski,
    netdev@vger.kernel.org
Subject: [PATCH 03/18] thunderbolt: Decrease control channel timeout for software connection manager
Date: Thu, 4 Mar 2021 15:31:10 +0300
Message-Id: <20210304123125.43630-4-mika.westerberg@linux.intel.com>
In-Reply-To: <20210304123125.43630-1-mika.westerberg@linux.intel.com>

When the firmware connection manager is not proxying between the
software and the hardware, we can decrease the timeout for control
packets significantly. The USB4 spec recommends 10 ms +- 1 ms, but we
use a slightly larger value (100 ms), which is the recommendation from
the Intel Thunderbolt firmware folks. When the firmware connection
manager is running we keep using the existing 5000 ms.

To implement this, move the control channel allocation to
tb_domain_alloc() and pass the timeout from that function to
tb_ctl_alloc(). Then make both connection manager implementations pass
the timeout when they allocate the domain structure. While there,
update the kernel-doc of struct tb_ctl to match reality.
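In other words, the timeout becomes a per-domain property chosen by
whichever connection manager probes the NHI. A condensed sketch of the
plumbing, using the values from this patch (structure simplified; the
real assignment happens inside tb_ctl_alloc()):

    #define TB_TIMEOUT	100	/* ms, software connection manager */
    #define ICM_TIMEOUT	5000	/* ms, firmware connection manager */

    /* tb_probe():  tb_domain_alloc(nhi, TB_TIMEOUT, sizeof(*tcm));
     * icm_probe(): tb_domain_alloc(nhi, ICM_TIMEOUT, sizeof(struct icm));
     */
    static void ctl_set_timeout(struct tb_ctl *ctl, int timeout_msec)
    {
    	/* stored once, then used by tb_cfg_read()/tb_cfg_write() and
    	 * friends instead of a global TB_CFG_DEFAULT_TIMEOUT
    	 */
    	ctl->timeout_msec = timeout_msec;
    }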
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/ctl.c    | 15 +++++---
 drivers/thunderbolt/ctl.h    |  5 ++-
 drivers/thunderbolt/domain.c | 66 +++++++++++++++++-------------------
 drivers/thunderbolt/icm.c    |  2 +-
 drivers/thunderbolt/tb.c     |  4 ++-
 drivers/thunderbolt/tb.h     |  2 +-
 6 files changed, 49 insertions(+), 45 deletions(-)

diff --git a/drivers/thunderbolt/ctl.c b/drivers/thunderbolt/ctl.c
index b79be1f02d92..0fb5e04191e2 100644
--- a/drivers/thunderbolt/ctl.c
+++ b/drivers/thunderbolt/ctl.c
@@ -29,6 +29,7 @@
  * @request_queue_lock: Lock protecting @request_queue
  * @request_queue: List of outstanding requests
  * @running: Is the control channel running at the moment
+ * @timeout_msec: Default timeout for non-raw control messages
  * @callback: Callback called when hotplug message is received
  * @callback_data: Data passed to @callback
  */
@@ -43,6 +44,7 @@ struct tb_ctl {
 	struct list_head request_queue;
 	bool running;
 
+	int timeout_msec;
 	event_cb callback;
 	void *callback_data;
 };
@@ -613,6 +615,7 @@ struct tb_cfg_result tb_cfg_request_sync(struct tb_ctl *ctl,
 /**
  * tb_ctl_alloc() - allocate a control channel
  * @nhi: Pointer to NHI
+ * @timeout_msec: Default timeout used with non-raw control messages
  * @cb: Callback called for plug events
  * @cb_data: Data passed to @cb
  *
@@ -620,13 +623,15 @@ struct tb_cfg_result tb_cfg_request_sync(struct tb_ctl *ctl,
 *
 * Return: Returns a pointer on success or NULL on failure.
 */
-struct tb_ctl *tb_ctl_alloc(struct tb_nhi *nhi, event_cb cb, void *cb_data)
+struct tb_ctl *tb_ctl_alloc(struct tb_nhi *nhi, int timeout_msec, event_cb cb,
+			    void *cb_data)
 {
 	int i;
 	struct tb_ctl *ctl = kzalloc(sizeof(*ctl), GFP_KERNEL);
 	if (!ctl)
 		return NULL;
 	ctl->nhi = nhi;
+	ctl->timeout_msec = timeout_msec;
 	ctl->callback = cb;
 	ctl->callback_data = cb_data;
 
@@ -829,7 +834,7 @@ struct tb_cfg_result tb_cfg_reset(struct tb_ctl *ctl, u64 route)
 	req->response_size = sizeof(reply);
 	req->response_type = TB_CFG_PKG_RESET;
 
-	res = tb_cfg_request_sync(ctl, req, TB_CFG_DEFAULT_TIMEOUT);
+	res = tb_cfg_request_sync(ctl, req, ctl->timeout_msec);
 
 	tb_cfg_request_put(req);
 
@@ -1005,7 +1010,7 @@ int tb_cfg_read(struct tb_ctl *ctl, void *buffer, u64 route, u32 port,
 		enum tb_cfg_space space, u32 offset, u32 length)
 {
 	struct tb_cfg_result res = tb_cfg_read_raw(ctl, buffer, route, port,
-			space, offset, length, TB_CFG_DEFAULT_TIMEOUT);
+			space, offset, length, ctl->timeout_msec);
 
 	switch (res.err) {
 	case 0: /* Success */
@@ -1031,7 +1036,7 @@ int tb_cfg_write(struct tb_ctl *ctl, const void *buffer, u64 route, u32 port,
 		 enum tb_cfg_space space, u32 offset, u32 length)
 {
 	struct tb_cfg_result res = tb_cfg_write_raw(ctl, buffer, route, port,
-			space, offset, length, TB_CFG_DEFAULT_TIMEOUT);
+			space, offset, length, ctl->timeout_msec);
 
 	switch (res.err) {
 	case 0: /* Success */
@@ -1069,7 +1074,7 @@ int tb_cfg_get_upstream_port(struct tb_ctl *ctl, u64 route)
 	u32 dummy;
 	struct tb_cfg_result res = tb_cfg_read_raw(ctl, &dummy, route, 0,
 						   TB_CFG_SWITCH, 0, 1,
-						   TB_CFG_DEFAULT_TIMEOUT);
+						   ctl->timeout_msec);
 	if (res.err == 1)
 		return -EIO;
 	if (res.err)
diff --git a/drivers/thunderbolt/ctl.h b/drivers/thunderbolt/ctl.h
index 2eafbfea5dff..e8c64898dfce 100644
--- a/drivers/thunderbolt/ctl.h
+++ b/drivers/thunderbolt/ctl.h
@@ -21,15 +21,14 @@ struct tb_ctl;
 typedef bool (*event_cb)(void *data, enum tb_cfg_pkg_type type,
			 const void *buf, size_t size);
 
-struct tb_ctl *tb_ctl_alloc(struct tb_nhi *nhi, event_cb cb, void *cb_data);
+struct tb_ctl *tb_ctl_alloc(struct tb_nhi *nhi, int timeout_msec, event_cb cb,
+			    void *cb_data);
 void tb_ctl_start(struct tb_ctl *ctl);
 void tb_ctl_stop(struct tb_ctl *ctl);
 void tb_ctl_free(struct tb_ctl *ctl);
 
 /* configuration commands */
 
-#define TB_CFG_DEFAULT_TIMEOUT 5000 /* msec */
-
 struct tb_cfg_result {
 	u64 response_route;
 	u32 response_port; /*
diff --git a/drivers/thunderbolt/domain.c b/drivers/thunderbolt/domain.c
index 89ae614eaba2..039486b61b6a 100644
--- a/drivers/thunderbolt/domain.c
+++ b/drivers/thunderbolt/domain.c
@@ -341,9 +341,34 @@ struct device_type tb_domain_type = {
 	.release = tb_domain_release,
 };
 
+static bool tb_domain_event_cb(void *data, enum tb_cfg_pkg_type type,
+			       const void *buf, size_t size)
+{
+	struct tb *tb = data;
+
+	if (!tb->cm_ops->handle_event) {
+		tb_warn(tb, "domain does not have event handler\n");
+		return true;
+	}
+
+	switch (type) {
+	case TB_CFG_PKG_XDOMAIN_REQ:
+	case TB_CFG_PKG_XDOMAIN_RESP:
+		if (tb_is_xdomain_enabled())
+			return tb_xdomain_handle_request(tb, type, buf, size);
+		break;
+
+	default:
+		tb->cm_ops->handle_event(tb, type, buf, size);
+	}
+
+	return true;
+}
+
 /**
  * tb_domain_alloc() - Allocate a domain
  * @nhi: Pointer to the host controller
+ * @timeout_msec: Control channel timeout for non-raw messages
  * @privsize: Size of the connection manager private data
  *
  * Allocates and initializes a new Thunderbolt domain. Connection
@@ -355,7 +380,7 @@ struct device_type tb_domain_type = {
 *
 * Return: allocated domain structure on %NULL in case of error
 */
-struct tb *tb_domain_alloc(struct tb_nhi *nhi, size_t privsize)
+struct tb *tb_domain_alloc(struct tb_nhi *nhi, int timeout_msec, size_t privsize)
 {
 	struct tb *tb;
 
@@ -382,6 +407,10 @@ struct tb *tb_domain_alloc(struct tb_nhi *nhi, size_t privsize)
 	if (!tb->wq)
 		goto err_remove_ida;
 
+	tb->ctl = tb_ctl_alloc(nhi, timeout_msec, tb_domain_event_cb, tb);
+	if (!tb->ctl)
+		goto err_destroy_wq;
+
 	tb->dev.parent = &nhi->pdev->dev;
 	tb->dev.bus = &tb_bus_type;
 	tb->dev.type = &tb_domain_type;
@@ -391,6 +420,8 @@ struct tb *tb_domain_alloc(struct tb_nhi *nhi, size_t privsize)
 
 	return tb;
 
+err_destroy_wq:
+	destroy_workqueue(tb->wq);
 err_remove_ida:
 	ida_simple_remove(&tb_domain_ida, tb->index);
 err_free:
@@ -399,30 +430,6 @@ struct tb *tb_domain_alloc(struct tb_nhi *nhi, size_t privsize)
 	return NULL;
 }
 
-static bool tb_domain_event_cb(void *data, enum tb_cfg_pkg_type type,
-			       const void *buf, size_t size)
-{
-	struct tb *tb = data;
-
-	if (!tb->cm_ops->handle_event) {
-		tb_warn(tb, "domain does not have event handler\n");
-		return true;
-	}
-
-	switch (type) {
-	case TB_CFG_PKG_XDOMAIN_REQ:
-	case TB_CFG_PKG_XDOMAIN_RESP:
-		if (tb_is_xdomain_enabled())
-			return tb_xdomain_handle_request(tb, type, buf, size);
-		break;
-
-	default:
-		tb->cm_ops->handle_event(tb, type, buf, size);
-	}
-
-	return true;
-}
-
 /**
  * tb_domain_add() - Add domain to the system
  * @tb: Domain to add
@@ -442,13 +449,6 @@ int tb_domain_add(struct tb *tb)
 		return -EINVAL;
 
 	mutex_lock(&tb->lock);
-
-	tb->ctl = tb_ctl_alloc(tb->nhi, tb_domain_event_cb, tb);
-	if (!tb->ctl) {
-		ret = -ENOMEM;
-		goto err_unlock;
-	}
-
 	/*
 	 * tb_schedule_hotplug_handler may be called as soon as the config
 	 * channel is started. Thats why we have to hold the lock here.
@@ -493,8 +493,6 @@ int tb_domain_add(struct tb *tb)
 	device_del(&tb->dev);
 err_ctl_stop:
 	tb_ctl_stop(tb->ctl);
-err_unlock:
-	mutex_unlock(&tb->lock);
 
 	return ret;
 }
diff --git a/drivers/thunderbolt/icm.c b/drivers/thunderbolt/icm.c
index f6f605d48371..c111b946c64d 100644
--- a/drivers/thunderbolt/icm.c
+++ b/drivers/thunderbolt/icm.c
@@ -2416,7 +2416,7 @@ struct tb *icm_probe(struct tb_nhi *nhi)
 	struct icm *icm;
 	struct tb *tb;
 
-	tb = tb_domain_alloc(nhi, sizeof(struct icm));
+	tb = tb_domain_alloc(nhi, ICM_TIMEOUT, sizeof(struct icm));
 	if (!tb)
 		return NULL;
diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
index c348b1fc0efc..4b3947965856 100644
--- a/drivers/thunderbolt/tb.c
+++ b/drivers/thunderbolt/tb.c
@@ -15,6 +15,8 @@
 #include "tb_regs.h"
 #include "tunnel.h"
 
+#define TB_TIMEOUT	100 /* ms */
+
 /**
  * struct tb_cm - Simple Thunderbolt connection manager
  * @tunnel_list: List of active tunnels
@@ -1562,7 +1564,7 @@ struct tb *tb_probe(struct tb_nhi *nhi)
 	struct tb_cm *tcm;
 	struct tb *tb;
 
-	tb = tb_domain_alloc(nhi, sizeof(*tcm));
+	tb = tb_domain_alloc(nhi, TB_TIMEOUT, sizeof(*tcm));
 	if (!tb)
 		return NULL;
diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index beea88c34c0f..d6ad45686488 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -625,7 +625,7 @@ void tb_domain_exit(void);
 int tb_xdomain_init(void);
 void tb_xdomain_exit(void);
 
-struct tb *tb_domain_alloc(struct tb_nhi *nhi, size_t privsize);
+struct tb *tb_domain_alloc(struct tb_nhi *nhi, int timeout_msec, size_t privsize);
 int tb_domain_add(struct tb *tb);
 void tb_domain_remove(struct tb *tb);
 int tb_domain_suspend_noirq(struct tb *tb);
From patchwork Thu Mar 4 12:31:11 2021
X-Patchwork-Submitter: Mika Westerberg <mika.westerberg@linux.intel.com>
X-Patchwork-Id: 393305
From: Mika Westerberg <mika.westerberg@linux.intel.com>
To: linux-usb@vger.kernel.org
Cc: Michael Jamet, Yehezkel Bernat, Andreas Noever, Isaac Hazan,
    Mika Westerberg, Lukas Wunner, "David S. Miller", Jakub Kicinski,
    netdev@vger.kernel.org
Subject: [PATCH 04/18] Documentation / thunderbolt: Drop speed/lanes entries for XDomain
Date: Thu, 4 Mar 2021 15:31:11 +0300
Message-Id: <20210304123125.43630-5-mika.westerberg@linux.intel.com>
In-Reply-To: <20210304123125.43630-1-mika.westerberg@linux.intel.com>

These are actually not needed, as we already have similar entries that
apply to all devices on the Thunderbolt bus.

Cc: Isaac Hazan
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 .../ABI/testing/sysfs-bus-thunderbolt | 28 -------------------
 1 file changed, 28 deletions(-)

diff --git a/Documentation/ABI/testing/sysfs-bus-thunderbolt b/Documentation/ABI/testing/sysfs-bus-thunderbolt
index d7f09d011b6d..bfa4ca6f3fc1 100644
--- a/Documentation/ABI/testing/sysfs-bus-thunderbolt
+++ b/Documentation/ABI/testing/sysfs-bus-thunderbolt
@@ -1,31 +1,3 @@
-What:		/sys/bus/thunderbolt/devices/<xdomain>/rx_speed
-Date:		Feb 2021
-KernelVersion:	5.11
-Contact:	Isaac Hazan
-Description:	This attribute reports the XDomain RX speed per lane.
-		All RX lanes run at the same speed.
-
-What:		/sys/bus/thunderbolt/devices/<xdomain>/rx_lanes
-Date:		Feb 2021
-KernelVersion:	5.11
-Contact:	Isaac Hazan
-Description:	This attribute reports the number of RX lanes the XDomain
-		is using simultaneously through its upstream port.
-
-What:		/sys/bus/thunderbolt/devices/<xdomain>/tx_speed
-Date:		Feb 2021
-KernelVersion:	5.11
-Contact:	Isaac Hazan
-Description:	This attribute reports the XDomain TX speed per lane.
-		All TX lanes run at the same speed.
-
-What:		/sys/bus/thunderbolt/devices/<xdomain>/tx_lanes
-Date:		Feb 2021
-KernelVersion:	5.11
-Contact:	Isaac Hazan
-Description:	This attribute reports number of TX lanes the XDomain
-		is using simultaneously through its upstream port.
-
 What:		/sys/bus/thunderbolt/devices/.../domainX/boot_acl
 Date:		Jun 2018
 KernelVersion:	4.17

From patchwork Thu Mar 4 12:31:12 2021
X-Patchwork-Submitter: Mika Westerberg <mika.westerberg@linux.intel.com>
X-Patchwork-Id: 394139
From: Mika Westerberg <mika.westerberg@linux.intel.com>
To: linux-usb@vger.kernel.org
Cc: Michael Jamet, Yehezkel Bernat, Andreas Noever, Isaac Hazan,
    Mika Westerberg, Lukas Wunner, "David S. Miller", Jakub Kicinski,
    netdev@vger.kernel.org
Subject: [PATCH 05/18] thunderbolt: Add more logging to XDomain connections
Date: Thu, 4 Mar 2021 15:31:12 +0300
Message-Id: <20210304123125.43630-6-mika.westerberg@linux.intel.com>
In-Reply-To: <20210304123125.43630-1-mika.westerberg@linux.intel.com>

Currently the driver is pretty quiet when another host is connected,
which makes it harder to debug possible issues. For this reason add
more logging on debug level that can be turned on as needed. While
there, log the host-to-host connection on info level, analogous to
routers and retimers.
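Since the new messages use dev_dbg()/tb_dbg(), they stay silent unless
dynamic debug is enabled for the module (with CONFIG_DYNAMIC_DEBUG,
e.g. echo 'module thunderbolt +p' > /sys/kernel/debug/dynamic_debug/control).
The two-tier pattern the patch applies throughout, sketched with calls
taken from the diff below:

    static void logging_sketch(struct tb_xdomain *xd)
    {
    	/* debug level: hidden by default, enabled on demand */
    	dev_dbg(&xd->dev, "requesting remote UUID\n");

    	/* info level: always in the log, mirrors how routers and
    	 * retimers are announced
    	 */
    	dev_info(&xd->dev, "new host found, vendor=%#x device=%#x\n",
    		 xd->vendor, xd->device);
    }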
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/xdomain.c | 34 +++++++++++++++++++++++++++++++---
 1 file changed, 31 insertions(+), 3 deletions(-)

diff --git a/drivers/thunderbolt/xdomain.c b/drivers/thunderbolt/xdomain.c
index 7cf8b9c85ab7..584bb5ec06f8 100644
--- a/drivers/thunderbolt/xdomain.c
+++ b/drivers/thunderbolt/xdomain.c
@@ -591,6 +591,8 @@ static void tb_xdp_handle_request(struct work_struct *work)
 
 	finalize_property_block();
 
+	tb_dbg(tb, "%llx: received XDomain request %#x\n", route, pkg->type);
+
 	switch (pkg->type) {
 	case PROPERTIES_REQUEST:
 		ret = tb_xdp_properties_response(tb, ctl, route, sequence, uuid,
@@ -1002,9 +1004,12 @@ static void tb_xdomain_get_uuid(struct work_struct *work)
 	uuid_t uuid;
 	int ret;
 
+	dev_dbg(&xd->dev, "requesting remote UUID\n");
+
 	ret = tb_xdp_uuid_request(tb->ctl, xd->route, xd->uuid_retries, &uuid);
 	if (ret < 0) {
 		if (xd->uuid_retries-- > 0) {
+			dev_dbg(&xd->dev, "failed to request UUID, retrying\n");
 			queue_delayed_work(xd->tb->wq, &xd->get_uuid_work,
 					   msecs_to_jiffies(100));
 		} else {
@@ -1013,6 +1018,8 @@ static void tb_xdomain_get_uuid(struct work_struct *work)
 		return;
 	}
 
+	dev_dbg(&xd->dev, "got remote UUID %pUb\n", &uuid);
+
 	if (uuid_equal(&uuid, xd->local_uuid))
 		dev_dbg(&xd->dev, "intra-domain loop detected\n");
@@ -1052,11 +1059,15 @@ static void tb_xdomain_get_properties(struct work_struct *work)
 	u32 gen = 0;
 	int ret;
 
+	dev_dbg(&xd->dev, "requesting remote properties\n");
+
 	ret = tb_xdp_properties_request(tb->ctl, xd->route, xd->local_uuid,
 					xd->remote_uuid, xd->properties_retries,
 					&block, &gen);
 	if (ret < 0) {
 		if (xd->properties_retries-- > 0) {
+			dev_dbg(&xd->dev,
+				"failed to request remote properties, retrying\n");
 			queue_delayed_work(xd->tb->wq, &xd->get_properties_work,
 					   msecs_to_jiffies(1000));
 		} else {
@@ -1123,6 +1134,11 @@ static void tb_xdomain_get_properties(struct work_struct *work)
 			dev_err(&xd->dev, "failed to add XDomain device\n");
 			return;
 		}
+		dev_info(&xd->dev, "new host found, vendor=%#x device=%#x\n",
+			 xd->vendor, xd->device);
+		if (xd->vendor_name && xd->device_name)
+			dev_info(&xd->dev, "%s %s\n", xd->vendor_name,
+				 xd->device_name);
 	} else {
 		kobject_uevent(&xd->dev.kobj, KOBJ_CHANGE);
 	}
@@ -1143,13 +1159,19 @@ static void tb_xdomain_properties_changed(struct work_struct *work)
 					   properties_changed_work.work);
 	int ret;
 
+	dev_dbg(&xd->dev, "sending properties changed notification\n");
+
 	ret = tb_xdp_properties_changed_request(xd->tb->ctl, xd->route,
 				xd->properties_changed_retries, xd->local_uuid);
 	if (ret) {
-		if (xd->properties_changed_retries-- > 0)
+		if (xd->properties_changed_retries-- > 0) {
+			dev_dbg(&xd->dev,
+				"failed to send properties changed notification, retrying\n");
 			queue_delayed_work(xd->tb->wq,
 					   &xd->properties_changed_work,
 					   msecs_to_jiffies(1000));
+		}
+
 		dev_err(&xd->dev, "failed to send properties changed notification\n");
 		return;
 	}
@@ -1390,6 +1412,10 @@ struct tb_xdomain *tb_xdomain_alloc(struct tb *tb, struct device *parent,
 	xd->dev.groups = xdomain_attr_groups;
 	dev_set_name(&xd->dev, "%u-%llx", tb->index, route);
 
+	dev_dbg(&xd->dev, "local UUID %pUb\n", local_uuid);
+	if (remote_uuid)
+		dev_dbg(&xd->dev, "remote UUID %pUb\n", remote_uuid);
+
 	/*
 	 * This keeps the DMA powered on as long as we have active
 	 * connection to another host.
@@ -1452,10 +1478,12 @@ void tb_xdomain_remove(struct tb_xdomain *xd)
 	pm_runtime_put_noidle(&xd->dev);
 	pm_runtime_set_suspended(&xd->dev);
 
-	if (!device_is_registered(&xd->dev))
+	if (!device_is_registered(&xd->dev)) {
 		put_device(&xd->dev);
-	else
+	} else {
+		dev_info(&xd->dev, "host disconnected\n");
 		device_unregister(&xd->dev);
+	}
 }
 
 /**

From patchwork Thu Mar 4 12:31:13 2021
X-Patchwork-Submitter: Mika Westerberg <mika.westerberg@linux.intel.com>
X-Patchwork-Id: 394134
From: Mika Westerberg <mika.westerberg@linux.intel.com>
To: linux-usb@vger.kernel.org
Cc: Michael Jamet, Yehezkel Bernat, Andreas Noever, Isaac Hazan,
    Mika Westerberg, Lukas Wunner, "David S. Miller", Jakub Kicinski,
    netdev@vger.kernel.org
Subject: [PATCH 06/18] thunderbolt: Do not re-establish XDomain DMA paths automatically
Date: Thu, 4 Mar 2021 15:31:13 +0300
Message-Id: <20210304123125.43630-7-mika.westerberg@linux.intel.com>
In-Reply-To: <20210304123125.43630-1-mika.westerberg@linux.intel.com>

This step is actually not needed. The service drivers themselves will
handle this once they have negotiated the service up and running again
with the remote side. Also, dropping this makes it easier to add
support for multiple DMA tunnels over a single XDomain connection.
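After this change the resume path only restarts the property
handshake; re-creating DMA paths is left to the service driver (for
example the networking service driver) once both sides have
re-negotiated. A simplified sketch of the resulting resume hook, based
on the diff that follows:

    static int xdomain_resume_sketch(struct device *dev)
    {
    	/* Re-read properties only; tb_domain_approve_xdomain_paths()
    	 * is no longer called from here -- the service driver
    	 * re-establishes its own paths when its handshake with the
    	 * remote side completes.
    	 */
    	start_handshake(tb_to_xdomain(dev));
    	return 0;
    }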
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/xdomain.c | 35 ++---------------------------------
 include/linux/thunderbolt.h   |  2 --
 2 files changed, 2 insertions(+), 35 deletions(-)

diff --git a/drivers/thunderbolt/xdomain.c b/drivers/thunderbolt/xdomain.c
index 584bb5ec06f8..a1657663a95e 100644
--- a/drivers/thunderbolt/xdomain.c
+++ b/drivers/thunderbolt/xdomain.c
@@ -946,19 +946,6 @@ static int populate_properties(struct tb_xdomain *xd,
 	return 0;
 }
 
-/* Called with @xd->lock held */
-static void tb_xdomain_restore_paths(struct tb_xdomain *xd)
-{
-	if (!xd->resume)
-		return;
-
-	xd->resume = false;
-	if (xd->transmit_path) {
-		dev_dbg(&xd->dev, "re-establishing DMA path\n");
-		tb_domain_approve_xdomain_paths(xd->tb, xd);
-	}
-}
-
 static inline struct tb_switch *tb_xdomain_parent(struct tb_xdomain *xd)
 {
 	return tb_to_switch(xd->dev.parent);
@@ -1084,16 +1071,8 @@ static void tb_xdomain_get_properties(struct work_struct *work)
 	mutex_lock(&xd->lock);
 
 	/* Only accept newer generation properties */
-	if (xd->properties && gen <= xd->property_block_gen) {
-		/*
-		 * On resume it is likely that the properties block is
-		 * not changed (unless the other end added or removed
-		 * services). However, we need to make sure the existing
-		 * DMA paths are restored properly.
-		 */
-		tb_xdomain_restore_paths(xd);
+	if (xd->properties && gen <= xd->property_block_gen)
 		goto err_free_block;
-	}
 
 	dir = tb_property_parse_dir(block, ret);
 	if (!dir) {
@@ -1118,8 +1097,6 @@ static void tb_xdomain_get_properties(struct work_struct *work)
 
 	tb_xdomain_update_link_attributes(xd);
 
-	tb_xdomain_restore_paths(xd);
-
 	mutex_unlock(&xd->lock);
 
 	kfree(block);
@@ -1332,15 +1309,7 @@ static int __maybe_unused tb_xdomain_suspend(struct device *dev)
 
 static int __maybe_unused tb_xdomain_resume(struct device *dev)
 {
-	struct tb_xdomain *xd = tb_to_xdomain(dev);
-
-	/*
-	 * Ask tb_xdomain_get_properties() restore any existing DMA
-	 * paths after properties are re-read.
-	 */
-	xd->resume = true;
-	start_handshake(xd);
-
+	start_handshake(tb_to_xdomain(dev));
 	return 0;
 }
 
diff --git a/include/linux/thunderbolt.h b/include/linux/thunderbolt.h
index 659a0a810fa1..7ec977161f5c 100644
--- a/include/linux/thunderbolt.h
+++ b/include/linux/thunderbolt.h
@@ -185,7 +185,6 @@ void tb_unregister_property_dir(const char *key, struct tb_property_dir *dir);
  * @link_speed: Speed of the link in Gb/s
  * @link_width: Width of the link (1 or 2)
  * @is_unplugged: The XDomain is unplugged
- * @resume: The XDomain is being resumed
  * @needs_uuid: If the XDomain does not have @remote_uuid it will be
  *		queried first
  * @transmit_path: HopID which the remote end expects us to transmit
@@ -231,7 +230,6 @@ struct tb_xdomain {
 	unsigned int link_speed;
 	unsigned int link_width;
 	bool is_unplugged;
-	bool resume;
 	bool needs_uuid;
 	u16 transmit_path;
 	u16 transmit_ring;
From patchwork Thu Mar 4 12:31:14 2021
X-Patchwork-Submitter: Mika Westerberg <mika.westerberg@linux.intel.com>
X-Patchwork-Id: 393306
From: Mika Westerberg <mika.westerberg@linux.intel.com>
To: linux-usb@vger.kernel.org
Cc: Michael Jamet, Yehezkel Bernat, Andreas Noever, Isaac Hazan,
    Mika Westerberg, Lukas Wunner, "David S. Miller", Jakub Kicinski,
    netdev@vger.kernel.org
Subject: [PATCH 07/18] thunderbolt: Use pseudo-random number as initial property block generation
Date: Thu, 4 Mar 2021 15:31:14 +0300
Message-Id: <20210304123125.43630-8-mika.westerberg@linux.intel.com>
In-Reply-To: <20210304123125.43630-1-mika.westerberg@linux.intel.com>

As recommended by the USB4 inter-domain service spec, use a
pseudo-random value instead of zero as the initial XDomain property
block generation value.

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/xdomain.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/thunderbolt/xdomain.c b/drivers/thunderbolt/xdomain.c
index a1657663a95e..cfe6fa7e84f4 100644
--- a/drivers/thunderbolt/xdomain.c
+++ b/drivers/thunderbolt/xdomain.c
@@ -12,6 +12,7 @@
 #include <linux/kmod.h>
 #include <linux/module.h>
 #include <linux/pm_runtime.h>
+#include <linux/prandom.h>
 #include <linux/utsname.h>
 #include <linux/uuid.h>
 #include <linux/workqueue.h>
@@ -1880,6 +1881,7 @@ int tb_xdomain_init(void)
 	tb_property_add_immediate(xdomain_property_dir, "deviceid", 0x1);
 	tb_property_add_immediate(xdomain_property_dir, "devicerv", 0x80000100);
 
+	xdomain_property_block_gen = prandom_u32();
 	return 0;
 }
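Why a random starting point helps (hedged reasoning, inferred from how
the generation is used elsewhere in this series): the remote side
caches the last generation it saw and only accepts newer blocks, so a
host that reboots and starts counting again from zero could have its
fresh properties ignored. A standalone sketch of the idea:

    #include <linux/prandom.h>

    static u32 property_block_gen;

    static int init_generation_sketch(void)
    {
    	/* Random initial value; later property changes keep
    	 * incrementing it, so "newer generation" comparisons on the
    	 * remote side still work after a reboot.
    	 */
    	property_block_gen = prandom_u32();
    	return 0;
    }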
From patchwork Thu Mar 4 12:31:15 2021
X-Patchwork-Submitter: Mika Westerberg <mika.westerberg@linux.intel.com>
X-Patchwork-Id: 394136
From: Mika Westerberg <mika.westerberg@linux.intel.com>
To: linux-usb@vger.kernel.org
Cc: Michael Jamet, Yehezkel Bernat, Andreas Noever, Isaac Hazan,
    Mika Westerberg, Lukas Wunner, "David S. Miller", Jakub Kicinski,
    netdev@vger.kernel.org
Subject: [PATCH 08/18] thunderbolt: Align XDomain protocol timeouts with the spec
Date: Thu, 4 Mar 2021 15:31:15 +0300
Message-Id: <20210304123125.43630-9-mika.westerberg@linux.intel.com>
In-Reply-To: <20210304123125.43630-1-mika.westerberg@linux.intel.com>

The USB4 inter-domain service spec has slightly different recommended
timeouts for the XDomain protocol, so align the driver with those.

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/xdomain.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/thunderbolt/xdomain.c b/drivers/thunderbolt/xdomain.c
index cfe6fa7e84f4..ffa9cc9e0e7d 100644
--- a/drivers/thunderbolt/xdomain.c
+++ b/drivers/thunderbolt/xdomain.c
@@ -19,9 +19,9 @@
 
 #include "tb.h"
 
-#define XDOMAIN_DEFAULT_TIMEOUT			5000 /* ms */
+#define XDOMAIN_DEFAULT_TIMEOUT			1000 /* ms */
 #define XDOMAIN_UUID_RETRIES			10
-#define XDOMAIN_PROPERTIES_RETRIES		60
+#define XDOMAIN_PROPERTIES_RETRIES		10
 #define XDOMAIN_PROPERTIES_CHANGED_RETRIES	10
 #define XDOMAIN_BONDING_WAIT			100 /* ms */
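Rough worst-case arithmetic under the new constants, ignoring the
delay between requeued attempts: the properties exchange now gives up
after about 10 s instead of roughly 300 s.

    #define XDOMAIN_DEFAULT_TIMEOUT		1000	/* ms */
    #define XDOMAIN_PROPERTIES_RETRIES		10

    /* Upper bound on the properties handshake:
     *   new: 10 retries * 1000 ms  = ~10 s
     *   old: 60 retries * 5000 ms  = ~300 s
     * (driver-side estimate; per-retry requeue delays add a bit more)
     */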
From patchwork Thu Mar 4 12:31:16 2021
X-Patchwork-Submitter: Mika Westerberg <mika.westerberg@linux.intel.com>
X-Patchwork-Id: 393302
From: Mika Westerberg <mika.westerberg@linux.intel.com>
To: linux-usb@vger.kernel.org
Cc: Michael Jamet, Yehezkel Bernat, Andreas Noever, Isaac Hazan,
    Mika Westerberg, Lukas Wunner, "David S. Miller", Jakub Kicinski,
    netdev@vger.kernel.org
Subject: [PATCH 09/18] thunderbolt: Add tb_property_copy_dir()
Date: Thu, 4 Mar 2021 15:31:16 +0300
Message-Id: <20210304123125.43630-10-mika.westerberg@linux.intel.com>
In-Reply-To: <20210304123125.43630-1-mika.westerberg@linux.intel.com>

This function takes a deep copy of the properties. We need this in
order to support more dynamic properties per XDomain connection, as
required by the USB4 inter-domain service spec.

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/property.c | 71 ++++++++++++++++++++++++++++++++++
 include/linux/thunderbolt.h    |  1 +
 2 files changed, 72 insertions(+)

diff --git a/drivers/thunderbolt/property.c b/drivers/thunderbolt/property.c
index d5b0cdb8f0b1..dc555cda98e6 100644
--- a/drivers/thunderbolt/property.c
+++ b/drivers/thunderbolt/property.c
@@ -501,6 +501,77 @@ ssize_t tb_property_format_dir(const struct tb_property_dir *dir, u32 *block,
 	return ret < 0 ? ret : 0;
 }
 
+/**
+ * tb_property_copy_dir() - Take a deep copy of directory
+ * @dir: Directory to copy
+ *
+ * This function takes a deep copy of @dir and returns back the copy. In
+ * case of error returns %NULL. The resulting directory needs to be
+ * released by calling tb_property_free_dir().
+ */
+struct tb_property_dir *tb_property_copy_dir(const struct tb_property_dir *dir)
+{
+	struct tb_property *property, *p = NULL;
+	struct tb_property_dir *d;
+
+	if (!dir)
+		return NULL;
+
+	d = tb_property_create_dir(dir->uuid);
+	if (!d)
+		return NULL;
+
+	list_for_each_entry(property, &dir->properties, list) {
+		struct tb_property *p;
+
+		p = tb_property_alloc(property->key, property->type);
+		if (!p)
+			goto err_free;
+
+		p->length = property->length;
+
+		switch (property->type) {
+		case TB_PROPERTY_TYPE_DIRECTORY:
+			p->value.dir = tb_property_copy_dir(property->value.dir);
+			if (!p->value.dir)
+				goto err_free;
+			break;
+
+		case TB_PROPERTY_TYPE_DATA:
+			p->value.data = kmemdup(property->value.data,
+						property->length * 4,
+						GFP_KERNEL);
+			if (!p->value.data)
+				goto err_free;
+			break;
+
+		case TB_PROPERTY_TYPE_TEXT:
+			p->value.text = kzalloc(p->length * 4, GFP_KERNEL);
+			if (!p->value.text)
+				goto err_free;
+			strcpy(p->value.text, property->value.text);
+			break;
+
+		case TB_PROPERTY_TYPE_VALUE:
+			p->value.immediate = property->value.immediate;
+			break;
+
+		default:
+			break;
+		}
+
+		list_add_tail(&p->list, &d->properties);
+	}
+
+	return d;
+
+err_free:
+	kfree(p);
+	tb_property_free_dir(d);
+
+	return NULL;
+}
+
 /**
  * tb_property_add_immediate() - Add immediate property to directory
  * @parent: Directory to add the property
diff --git a/include/linux/thunderbolt.h b/include/linux/thunderbolt.h
index 7ec977161f5c..003a9ad29168 100644
--- a/include/linux/thunderbolt.h
+++ b/include/linux/thunderbolt.h
@@ -146,6 +146,7 @@ struct tb_property_dir *tb_property_parse_dir(const u32 *block,
					      size_t block_len);
 ssize_t tb_property_format_dir(const struct tb_property_dir *dir, u32 *block,
			       size_t block_len);
+struct tb_property_dir *tb_property_copy_dir(const struct tb_property_dir *dir);
 struct tb_property_dir *tb_property_create_dir(const uuid_t *uuid);
 void tb_property_free_dir(struct tb_property_dir *dir);
 int tb_property_add_immediate(struct tb_property_dir *parent, const char *key,
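Intended usage, per the commit message and as patch 10/18 below does:
copy the global template, add per-connection entries to the copy,
serialize it, then free the copy. A condensed sketch using only
helpers this series already exports (local_max_hopid stands in for the
per-connection value):

    struct tb_property_dir *dir;
    ssize_t block_len;

    dir = tb_property_copy_dir(xdomain_property_dir);
    if (!dir)
    	return -ENOMEM;

    /* Dynamic, per-connection entries go into the copy only */
    tb_property_add_immediate(dir, "maxhopid", local_max_hopid);

    block_len = tb_property_format_dir(dir, NULL, 0);	/* size pass */
    /* ... allocate a block of block_len u32s and format into it ... */
    tb_property_free_dir(dir);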
From patchwork Thu Mar 4 12:31:17 2021
X-Patchwork-Submitter: Mika Westerberg <mika.westerberg@linux.intel.com>
X-Patchwork-Id: 393301
From: Mika Westerberg <mika.westerberg@linux.intel.com>
To: linux-usb@vger.kernel.org
Cc: Michael Jamet, Yehezkel Bernat, Andreas Noever, Isaac Hazan,
    Mika Westerberg, Lukas Wunner, "David S. Miller", Jakub Kicinski,
    netdev@vger.kernel.org
Subject: [PATCH 10/18] thunderbolt: Add support for maxhopid XDomain property
Date: Thu, 4 Mar 2021 15:31:17 +0300
Message-Id: <20210304123125.43630-11-mika.westerberg@linux.intel.com>
In-Reply-To: <20210304123125.43630-1-mika.westerberg@linux.intel.com>

The USB4 inter-domain spec mandates that compatible hosts expose a new
property, "maxhopid", that tells the connection manager on the other
side the maximum input HopID it supports over the connection. Since
this depends on the lane adapter the cable is connected to, it needs
to be filled in dynamically. For this reason we take a copy of the
global properties, fill them in for each XDomain connection upon first
connect, and then keep updating the copy if the generation changes as
services are added or removed. We also take advantage of this copy to
fill in the hostname.

We also expose this maxhopid as an attribute under each XDomain
device.

While there, drop the kernel-doc entry for property_lock, which seems
to have been left behind when the structure was originally introduced.
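The fallback logic this adds boils down to one lookup with a
TBT3-compatible default; both lines appear verbatim in the diff that
follows:

    #define XDOMAIN_DEFAULT_MAX_HOPID	15	/* TBT3 fallback */

    p = tb_property_find(dir, "maxhopid", TB_PROPERTY_TYPE_VALUE);
    xd->remote_max_hopid = p ? p->value.immediate : XDOMAIN_DEFAULT_MAX_HOPID;

Userspace can then read the announced value from the new maxhopid
sysfs attribute documented in the ABI update below.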
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 .../ABI/testing/sysfs-bus-thunderbolt |   7 +
 drivers/thunderbolt/xdomain.c         | 206 ++++++++++--------
 include/linux/thunderbolt.h           |  19 +-
 3 files changed, 138 insertions(+), 94 deletions(-)

diff --git a/Documentation/ABI/testing/sysfs-bus-thunderbolt b/Documentation/ABI/testing/sysfs-bus-thunderbolt
index bfa4ca6f3fc1..c41c68f64693 100644
--- a/Documentation/ABI/testing/sysfs-bus-thunderbolt
+++ b/Documentation/ABI/testing/sysfs-bus-thunderbolt
@@ -134,6 +134,13 @@ Contact:	thunderbolt-software@lists.01.org
 Description:	This attribute contains name of this device extracted from
 		the device DROM.
 
+What:		/sys/bus/thunderbolt/devices/.../maxhopid
+Date:		Jul 2021
+KernelVersion:	5.13
+Contact:	Mika Westerberg <mika.westerberg@linux.intel.com>
+Description:	Only set for XDomains. The maximum HopID the other host
+		supports as its input HopID.
+
 What:		/sys/bus/thunderbolt/devices/.../rx_speed
 Date:		Jan 2020
 KernelVersion:	5.5
diff --git a/drivers/thunderbolt/xdomain.c b/drivers/thunderbolt/xdomain.c
index ffa9cc9e0e7d..ab56757d7c24 100644
--- a/drivers/thunderbolt/xdomain.c
+++ b/drivers/thunderbolt/xdomain.c
@@ -24,6 +24,7 @@
 #define XDOMAIN_PROPERTIES_RETRIES		10
 #define XDOMAIN_PROPERTIES_CHANGED_RETRIES	10
 #define XDOMAIN_BONDING_WAIT			100 /* ms */
+#define XDOMAIN_DEFAULT_MAX_HOPID		15
 
 struct xdomain_request_work {
 	struct work_struct work;
@@ -35,13 +36,15 @@ static bool tb_xdomain_enabled = true;
 module_param_named(xdomain, tb_xdomain_enabled, bool, 0444);
 MODULE_PARM_DESC(xdomain, "allow XDomain protocol (default: true)");
 
-/* Serializes access to the properties and protocol handlers below */
+/*
+ * Serializes access to the properties and protocol handlers below. If
+ * you need to take both this lock and the struct tb_xdomain lock, take
+ * this one first.
+ */
 static DEFINE_MUTEX(xdomain_lock);
 
 /* Properties exposed to the remote domains */
 static struct tb_property_dir *xdomain_property_dir;
-static u32 *xdomain_property_block;
-static u32 xdomain_property_block_len;
 static u32 xdomain_property_block_gen;
 
 /* Additional protocol handlers */
@@ -386,8 +389,7 @@ static int tb_xdp_properties_request(struct tb_ctl *ctl, u64 route,
 }
 
 static int tb_xdp_properties_response(struct tb *tb, struct tb_ctl *ctl,
-	u64 route, u8 sequence, const uuid_t *src_uuid,
-	const struct tb_xdp_properties *req)
+	struct tb_xdomain *xd, u8 sequence, const struct tb_xdp_properties *req)
 {
 	struct tb_xdp_properties_response *res;
 	size_t total_size;
@@ -399,39 +401,39 @@ static int tb_xdp_properties_response(struct tb *tb, struct tb_ctl *ctl,
 	 * protocol supports forwarding, though which we might add
 	 * support later on.
 	 */
-	if (!uuid_equal(src_uuid, &req->dst_uuid)) {
-		tb_xdp_error_response(ctl, route, sequence,
+	if (!uuid_equal(xd->local_uuid, &req->dst_uuid)) {
+		tb_xdp_error_response(ctl, xd->route, sequence,
 				      ERROR_UNKNOWN_DOMAIN);
 		return 0;
 	}
 
-	mutex_lock(&xdomain_lock);
+	mutex_lock(&xd->lock);
 
-	if (req->offset >= xdomain_property_block_len) {
-		mutex_unlock(&xdomain_lock);
+	if (req->offset >= xd->local_property_block_len) {
+		mutex_unlock(&xd->lock);
 		return -EINVAL;
 	}
 
-	len = xdomain_property_block_len - req->offset;
+	len = xd->local_property_block_len - req->offset;
 	len = min_t(u16, len, TB_XDP_PROPERTIES_MAX_DATA_LENGTH);
 	total_size = sizeof(*res) + len * 4;
 
 	res = kzalloc(total_size, GFP_KERNEL);
 	if (!res) {
-		mutex_unlock(&xdomain_lock);
+		mutex_unlock(&xd->lock);
 		return -ENOMEM;
 	}
 
-	tb_xdp_fill_header(&res->hdr, route, sequence, PROPERTIES_RESPONSE,
+	tb_xdp_fill_header(&res->hdr, xd->route, sequence, PROPERTIES_RESPONSE,
 			   total_size);
-	res->generation = xdomain_property_block_gen;
-	res->data_length = xdomain_property_block_len;
+	res->generation = xd->local_property_block_gen;
+	res->data_length = xd->local_property_block_len;
 	res->offset = req->offset;
-	uuid_copy(&res->src_uuid, src_uuid);
+	uuid_copy(&res->src_uuid, xd->local_uuid);
 	uuid_copy(&res->dst_uuid, &req->src_uuid);
-	memcpy(res->data, &xdomain_property_block[req->offset], len * 4);
+	memcpy(res->data, &xd->local_property_block[req->offset], len * 4);
 
-	mutex_unlock(&xdomain_lock);
+	mutex_unlock(&xd->lock);
 
 	ret = __tb_xdomain_response(ctl, res, total_size,
 				    TB_CFG_PKG_XDOMAIN_RESP);
@@ -513,52 +515,63 @@ void tb_unregister_protocol_handler(struct tb_protocol_handler *handler)
 }
 EXPORT_SYMBOL_GPL(tb_unregister_protocol_handler);
 
-static int rebuild_property_block(void)
+static void update_property_block(struct tb_xdomain *xd)
 {
-	u32 *block, len;
-	int ret;
-
-	ret = tb_property_format_dir(xdomain_property_dir, NULL, 0);
-	if (ret < 0)
-		return ret;
-
-	len = ret;
-
-	block = kcalloc(len, sizeof(u32), GFP_KERNEL);
-	if (!block)
-		return -ENOMEM;
+	mutex_lock(&xdomain_lock);
+	mutex_lock(&xd->lock);
+	/*
+	 * If the local property block is not up-to-date, rebuild it now
+	 * based on the global property template.
+ */ + if (!xd->local_property_block || + xd->local_property_block_gen < xdomain_property_block_gen) { + struct tb_property_dir *dir; + int ret, block_len; + u32 *block; + + dir = tb_property_copy_dir(xdomain_property_dir); + if (!dir) { + dev_warn(&xd->dev, "failed to copy properties\n"); + goto out_unlock; + } - ret = tb_property_format_dir(xdomain_property_dir, block, len); - if (ret) { - kfree(block); - return ret; - } + /* Fill in non-static properties now */ + tb_property_add_text(dir, "deviceid", utsname()->nodename); + tb_property_add_immediate(dir, "maxhopid", xd->local_max_hopid); - kfree(xdomain_property_block); - xdomain_property_block = block; - xdomain_property_block_len = len; - xdomain_property_block_gen++; + ret = tb_property_format_dir(dir, NULL, 0); + if (ret < 0) { + dev_warn(&xd->dev, "local property block creation failed\n"); + tb_property_free_dir(dir); + goto out_unlock; + } - return 0; -} + block_len = ret; + block = kcalloc(block_len, sizeof(*block), GFP_KERNEL); + if (!block) { + tb_property_free_dir(dir); + goto out_unlock; + } -static void finalize_property_block(void) -{ - const struct tb_property *nodename; + ret = tb_property_format_dir(dir, block, block_len); + if (ret) { + dev_warn(&xd->dev, "property block generation failed\n"); + tb_property_free_dir(dir); + kfree(block); + goto out_unlock; + } - /* - * On first XDomain connection we set up the the system - * nodename. This delayed here because userspace may not have it - * set when the driver is first probed. - */ - mutex_lock(&xdomain_lock); - nodename = tb_property_find(xdomain_property_dir, "deviceid", - TB_PROPERTY_TYPE_TEXT); - if (!nodename) { - tb_property_add_text(xdomain_property_dir, "deviceid", - utsname()->nodename); - rebuild_property_block(); + tb_property_free_dir(dir); + /* Release the previous block */ + kfree(xd->local_property_block); + /* Assign new one */ + xd->local_property_block = block; + xd->local_property_block_len = block_len; + xd->local_property_block_gen = xdomain_property_block_gen; } + +out_unlock: + mutex_unlock(&xd->lock); mutex_unlock(&xdomain_lock); } @@ -569,6 +582,7 @@ static void tb_xdp_handle_request(struct work_struct *work) const struct tb_xdomain_header *xhdr = &pkg->xd_hdr; struct tb *tb = xw->tb; struct tb_ctl *ctl = tb->ctl; + struct tb_xdomain *xd; const uuid_t *uuid; int ret = 0; u32 sequence; @@ -590,19 +604,21 @@ static void tb_xdp_handle_request(struct work_struct *work) goto out; } - finalize_property_block(); - tb_dbg(tb, "%llx: received XDomain request %#x\n", route, pkg->type); + xd = tb_xdomain_find_by_route_locked(tb, route); + if (xd) + update_property_block(xd); + switch (pkg->type) { case PROPERTIES_REQUEST: - ret = tb_xdp_properties_response(tb, ctl, route, sequence, uuid, - (const struct tb_xdp_properties *)pkg); + if (xd) { + ret = tb_xdp_properties_response(tb, ctl, xd, sequence, + (const struct tb_xdp_properties *)pkg); + } break; - case PROPERTIES_CHANGED_REQUEST: { - struct tb_xdomain *xd; - + case PROPERTIES_CHANGED_REQUEST: ret = tb_xdp_properties_changed_response(ctl, route, sequence); /* @@ -610,17 +626,11 @@ static void tb_xdp_handle_request(struct work_struct *work) * the xdomain related to this connection as well in * case there is a change in services it offers. 
*/ - xd = tb_xdomain_find_by_route_locked(tb, route); - if (xd) { - if (device_is_registered(&xd->dev)) { - queue_delayed_work(tb->wq, &xd->get_properties_work, - msecs_to_jiffies(50)); - } - tb_xdomain_put(xd); + if (xd && device_is_registered(&xd->dev)) { + queue_delayed_work(tb->wq, &xd->get_properties_work, + msecs_to_jiffies(50)); } - break; - } case UUID_REQUEST_OLD: case UUID_REQUEST: @@ -633,6 +643,8 @@ static void tb_xdp_handle_request(struct work_struct *work) break; } + tb_xdomain_put(xd); + if (ret) { tb_warn(tb, "failed to send XDomain response for %#x\n", pkg->type); @@ -814,7 +826,7 @@ static int remove_missing_service(struct device *dev, void *data) if (!svc) return 0; - if (!tb_property_find(xd->properties, svc->key, + if (!tb_property_find(xd->remote_properties, svc->key, TB_PROPERTY_TYPE_DIRECTORY)) device_unregister(dev); @@ -874,7 +886,7 @@ static void enumerate_services(struct tb_xdomain *xd) device_for_each_child_reverse(&xd->dev, xd, remove_missing_service); /* Then re-enumerate properties creating new services as we go */ - tb_property_for_each(xd->properties, p) { + tb_property_for_each(xd->remote_properties, p) { if (p->type != TB_PROPERTY_TYPE_DIRECTORY) continue; @@ -931,6 +943,14 @@ static int populate_properties(struct tb_xdomain *xd, return -EINVAL; xd->vendor = p->value.immediate; + p = tb_property_find(dir, "maxhopid", TB_PROPERTY_TYPE_VALUE); + /* + * USB4 inter-domain spec suggests using 15 as HopID if the + * other end does not announce it in a property. This is for + * TBT3 compatibility. + */ + xd->remote_max_hopid = p ? p->value.immediate : XDOMAIN_DEFAULT_MAX_HOPID; + kfree(xd->device_name); xd->device_name = NULL; kfree(xd->vendor_name); @@ -1072,7 +1092,7 @@ static void tb_xdomain_get_properties(struct work_struct *work) mutex_lock(&xd->lock); /* Only accept newer generation properties */ - if (xd->properties && gen <= xd->property_block_gen) + if (xd->remote_properties && gen <= xd->remote_property_block_gen) goto err_free_block; dir = tb_property_parse_dir(block, ret); @@ -1088,13 +1108,13 @@ static void tb_xdomain_get_properties(struct work_struct *work) } /* Release the existing one */ - if (xd->properties) { - tb_property_free_dir(xd->properties); + if (xd->remote_properties) { + tb_property_free_dir(xd->remote_properties); update = true; } - xd->properties = dir; - xd->property_block_gen = gen; + xd->remote_properties = dir; + xd->remote_property_block_gen = gen; tb_xdomain_update_link_attributes(xd); @@ -1180,6 +1200,15 @@ device_name_show(struct device *dev, struct device_attribute *attr, char *buf) } static DEVICE_ATTR_RO(device_name); +static ssize_t maxhopid_show(struct device *dev, struct device_attribute *attr, + char *buf) +{ + struct tb_xdomain *xd = container_of(dev, struct tb_xdomain, dev); + + return sprintf(buf, "%d\n", xd->remote_max_hopid); +} +static DEVICE_ATTR_RO(maxhopid); + static ssize_t vendor_show(struct device *dev, struct device_attribute *attr, char *buf) { @@ -1238,6 +1267,7 @@ static DEVICE_ATTR(tx_lanes, 0444, lanes_show, NULL); static struct attribute *xdomain_attrs[] = { &dev_attr_device.attr, &dev_attr_device_name.attr, + &dev_attr_maxhopid.attr, &dev_attr_rx_lanes.attr, &dev_attr_rx_speed.attr, &dev_attr_tx_lanes.attr, @@ -1263,7 +1293,8 @@ static void tb_xdomain_release(struct device *dev) put_device(xd->dev.parent); - tb_property_free_dir(xd->properties); + kfree(xd->local_property_block); + tb_property_free_dir(xd->remote_properties); ida_destroy(&xd->service_ids); kfree(xd->local_uuid); @@ -1355,6 +1386,7 
@@ struct tb_xdomain *tb_xdomain_alloc(struct tb *tb, struct device *parent, xd->tb = tb; xd->route = route; + xd->local_max_hopid = down->config.max_in_hop_id; ida_init(&xd->service_ids); mutex_init(&xd->lock); INIT_DELAYED_WORK(&xd->get_uuid_work, tb_xdomain_get_uuid); @@ -1824,11 +1856,7 @@ int tb_register_property_dir(const char *key, struct tb_property_dir *dir) if (ret) goto err_unlock; - ret = rebuild_property_block(); - if (ret) { - remove_directory(key, dir); - goto err_unlock; - } + xdomain_property_block_gen++; mutex_unlock(&xdomain_lock); update_all_xdomains(); @@ -1854,7 +1882,7 @@ void tb_unregister_property_dir(const char *key, struct tb_property_dir *dir) mutex_lock(&xdomain_lock); if (remove_directory(key, dir)) - ret = rebuild_property_block(); + xdomain_property_block_gen++; mutex_unlock(&xdomain_lock); if (!ret) @@ -1873,7 +1901,8 @@ int tb_xdomain_init(void) * directories. Those will be added by service drivers * themselves when they are loaded. * - * We also add node name later when first connection is made. + * Rest of the properties are filled dynamically based on these + * when the P2P connection is made. */ tb_property_add_immediate(xdomain_property_dir, "vendorid", PCI_VENDOR_ID_INTEL); @@ -1887,6 +1916,5 @@ int tb_xdomain_init(void) void tb_xdomain_exit(void) { - kfree(xdomain_property_block); tb_property_free_dir(xdomain_property_dir); } diff --git a/include/linux/thunderbolt.h b/include/linux/thunderbolt.h index 003a9ad29168..3e0ce654d60c 100644 --- a/include/linux/thunderbolt.h +++ b/include/linux/thunderbolt.h @@ -180,6 +180,8 @@ void tb_unregister_property_dir(const char *key, struct tb_property_dir *dir); * @route: Route string the other domain can be reached * @vendor: Vendor ID of the remote domain * @device: Device ID of the demote domain + * @local_max_hopid: Maximum input HopID of this host + * @remote_max_hopid: Maximum input HopID of the remote host * @lock: Lock to serialize access to the following fields of this structure * @vendor_name: Name of the vendor (or %NULL if not known) * @device_name: Name of the device (or %NULL if not known) @@ -193,9 +195,11 @@ void tb_unregister_property_dir(const char *key, struct tb_property_dir *dir); * @receive_path: HopID which we expect the remote end to transmit * @receive_ring: Local ring (hop) where incoming packets arrive * @service_ids: Used to generate IDs for the services - * @properties: Properties exported by the remote domain - * @property_block_gen: Generation of @properties - * @properties_lock: Lock protecting @properties. 
+ * @local_property_block: Local block of properties + * @local_property_block_gen: Generation of @local_property_block + * @local_property_block_len: Length of the @local_property_block in dwords + * @remote_properties: Properties exported by the remote domain + * @remote_property_block_gen: Generation of @remote_properties * @get_uuid_work: Work used to retrieve @remote_uuid * @uuid_retries: Number of times left @remote_uuid is requested before * giving up @@ -225,6 +229,8 @@ struct tb_xdomain { u64 route; u16 vendor; u16 device; + unsigned int local_max_hopid; + unsigned int remote_max_hopid; struct mutex lock; const char *vendor_name; const char *device_name; @@ -237,8 +243,11 @@ struct tb_xdomain { u16 receive_path; u16 receive_ring; struct ida service_ids; - struct tb_property_dir *properties; - u32 property_block_gen; + u32 *local_property_block; + u32 local_property_block_gen; + u32 local_property_block_len; + struct tb_property_dir *remote_properties; + u32 remote_property_block_gen; struct delayed_work get_uuid_work; int uuid_retries; struct delayed_work get_properties_work;
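To make the reworked property flow concrete, here is a minimal sketch of a service driver publishing a property directory. It is not part of the patch: the UUID, the "example" key and the function name are invented for illustration, while tb_property_create_dir(), tb_property_add_immediate(), tb_register_property_dir() and tb_property_free_dir() are the existing exported helpers used elsewhere in the series. After this patch, registration only bumps the global generation; each XDomain rebuilds its own property block (adding "deviceid" and "maxhopid") lazily in update_property_block() when the next request arrives.

#include <linux/thunderbolt.h>
#include <linux/uuid.h>

/* Illustrative UUID only */
static const uuid_t example_dir_uuid =
	UUID_INIT(0x9e588f79, 0x478a, 0x4368,
		  0x97, 0x64, 0x3c, 0x14, 0x4f, 0x01, 0x23, 0x45);
static struct tb_property_dir *example_dir;

static int example_register_properties(void)
{
	int ret;

	example_dir = tb_property_create_dir(&example_dir_uuid);
	if (!example_dir)
		return -ENOMEM;

	/* Static protocol properties only; per-domain values such as
	 * "deviceid" and "maxhopid" are filled in later by
	 * update_property_block() for each XDomain separately. */
	tb_property_add_immediate(example_dir, "prtcvers", 1);

	/* With this patch this just increments the global generation */
	ret = tb_register_property_dir("example", example_dir);
	if (ret)
		tb_property_free_dir(example_dir);
	return ret;
}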
Miller" , Jakub Kicinski , netdev@vger.kernel.org Subject: [PATCH 11/18] thunderbolt: Use dedicated flow control for DMA tunnels Date: Thu, 4 Mar 2021 15:31:18 +0300 Message-Id: <20210304123125.43630-12-mika.westerberg@linux.intel.com> X-Mailer: git-send-email 2.30.1 In-Reply-To: <20210304123125.43630-1-mika.westerberg@linux.intel.com> References: <20210304123125.43630-1-mika.westerberg@linux.intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-usb@vger.kernel.org The USB4 inter-domain service spec recommends using dedicated flow control scheme so update the driver accordingly. Signed-off-by: Mika Westerberg --- drivers/thunderbolt/tunnel.c | 20 ++++---------------- 1 file changed, 4 insertions(+), 16 deletions(-) diff --git a/drivers/thunderbolt/tunnel.c b/drivers/thunderbolt/tunnel.c index 6557b6e07009..2e7ec037a73e 100644 --- a/drivers/thunderbolt/tunnel.c +++ b/drivers/thunderbolt/tunnel.c @@ -794,24 +794,14 @@ static u32 tb_dma_credits(struct tb_port *nhi) return min(max_credits, 13U); } -static int tb_dma_activate(struct tb_tunnel *tunnel, bool active) -{ - struct tb_port *nhi = tunnel->src_port; - u32 credits; - - credits = active ? tb_dma_credits(nhi) : 0; - return tb_port_set_initial_credits(nhi, credits); -} - -static void tb_dma_init_path(struct tb_path *path, unsigned int isb, - unsigned int efc, u32 credits) +static void tb_dma_init_path(struct tb_path *path, unsigned int efc, u32 credits) { int i; path->egress_fc_enable = efc; path->ingress_fc_enable = TB_PATH_ALL; path->egress_shared_buffer = TB_PATH_NONE; - path->ingress_shared_buffer = isb; + path->ingress_shared_buffer = TB_PATH_NONE; path->priority = 5; path->weight = 1; path->clear_fc = true; @@ -856,7 +846,6 @@ struct tb_tunnel *tb_tunnel_alloc_dma(struct tb *tb, struct tb_port *nhi, if (!tunnel) return NULL; - tunnel->activate = tb_dma_activate; tunnel->src_port = nhi; tunnel->dst_port = dst; @@ -869,8 +858,7 @@ struct tb_tunnel *tb_tunnel_alloc_dma(struct tb *tb, struct tb_port *nhi, tb_tunnel_free(tunnel); return NULL; } - tb_dma_init_path(path, TB_PATH_NONE, TB_PATH_SOURCE | TB_PATH_INTERNAL, - credits); + tb_dma_init_path(path, TB_PATH_SOURCE | TB_PATH_INTERNAL, credits); tunnel->paths[i++] = path; } @@ -881,7 +869,7 @@ struct tb_tunnel *tb_tunnel_alloc_dma(struct tb *tb, struct tb_port *nhi, tb_tunnel_free(tunnel); return NULL; } - tb_dma_init_path(path, TB_PATH_SOURCE, TB_PATH_ALL, credits); + tb_dma_init_path(path, TB_PATH_ALL, credits); tunnel->paths[i++] = path; } From patchwork Thu Mar 4 12:31:19 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mika Westerberg X-Patchwork-Id: 394138 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id F3613C15507 for ; Thu, 4 Mar 2021 12:34:10 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id DCCC564F3D for ; Thu, 4 Mar 2021 12:34:10 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S240837AbhCDMdY (ORCPT ); Thu, 4 Mar 2021 07:33:24 -0500 
From patchwork Thu Mar 4 12:31:19 2021 From: Mika Westerberg To: linux-usb@vger.kernel.org Cc: Michael Jamet , Yehezkel Bernat , Andreas Noever , Isaac Hazan , Mika Westerberg , Lukas Wunner , "David S . Miller" , Jakub Kicinski , netdev@vger.kernel.org Subject: [PATCH 12/18] thunderbolt: Drop unused tb_port_set_initial_credits() Date: Thu, 4 Mar 2021 15:31:19 +0300 Message-Id: <20210304123125.43630-13-mika.westerberg@linux.intel.com> This function is no longer used in the driver, so we can remove it. Signed-off-by: Mika Westerberg --- drivers/thunderbolt/switch.c | 22 ---------------------- drivers/thunderbolt/tb.h | 1 - 2 files changed, 23 deletions(-) diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c index 218869c6ee21..7ac37a1f95e1 100644 --- a/drivers/thunderbolt/switch.c +++ b/drivers/thunderbolt/switch.c @@ -626,28 +626,6 @@ int tb_port_add_nfc_credits(struct tb_port *port, int credits) TB_CFG_PORT, ADP_CS_4, 1); } -/** - * tb_port_set_initial_credits() - Set initial port link credits allocated - * @port: Port to set the initial credits - * @credits: Number of credits to to allocate - * - * Set initial credits value to be used for ingress shared buffering.
- */ -int tb_port_set_initial_credits(struct tb_port *port, u32 credits) -{ - u32 data; - int ret; - - ret = tb_port_read(port, &data, TB_CFG_PORT, ADP_CS_5, 1); - if (ret) - return ret; - - data &= ~ADP_CS_5_LCA_MASK; - data |= (credits << ADP_CS_5_LCA_SHIFT) & ADP_CS_5_LCA_MASK; - - return tb_port_write(port, &data, TB_CFG_PORT, ADP_CS_5, 1); -} - /** * tb_port_clear_counter() - clear a counter in TB_CFG_COUNTER * @port: Port whose counters to clear diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h index d6ad45686488..ec8cdbc761fa 100644 --- a/drivers/thunderbolt/tb.h +++ b/drivers/thunderbolt/tb.h @@ -860,7 +860,6 @@ static inline bool tb_switch_tmu_is_enabled(const struct tb_switch *sw) int tb_wait_for_port(struct tb_port *port, bool wait_if_unplugged); int tb_port_add_nfc_credits(struct tb_port *port, int credits); -int tb_port_set_initial_credits(struct tb_port *port, u32 credits); int tb_port_clear_counter(struct tb_port *port, int counter); int tb_port_unlock(struct tb_port *port); int tb_port_enable(struct tb_port *port);
Miller" , Jakub Kicinski , netdev@vger.kernel.org Subject: [PATCH 13/18] thunderbolt: Allow multiple DMA tunnels over a single XDomain connection Date: Thu, 4 Mar 2021 15:31:20 +0300 Message-Id: <20210304123125.43630-14-mika.westerberg@linux.intel.com> X-Mailer: git-send-email 2.30.1 In-Reply-To: <20210304123125.43630-1-mika.westerberg@linux.intel.com> References: <20210304123125.43630-1-mika.westerberg@linux.intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-usb@vger.kernel.org Currently we have had an artificial limitation of a single DMA tunnel per XDomain connection. However, hardware wise there is no such limit and software based connection manager can take advantage of all the DMA rings available on the host to establish tunnels. For this reason make the tb_xdomain_[enable|disable]_paths() to take the DMA ring and HopID as parameter instead of storing them in the struct tb_xdomain. We also add API functions to allocate input and output HopIDs of the XDomain connection that the service drivers can use instead of hard-coding. Also convert the two existing service drivers over to this API. Signed-off-by: Mika Westerberg --- drivers/net/thunderbolt.c | 49 +++++++++--- drivers/thunderbolt/dma_test.c | 35 ++++++++- drivers/thunderbolt/domain.c | 24 ++++-- drivers/thunderbolt/icm.c | 32 +++++--- drivers/thunderbolt/tb.c | 48 +++++++----- drivers/thunderbolt/tb.h | 16 +++- drivers/thunderbolt/tunnel.c | 82 ++++++++++++++++--- drivers/thunderbolt/tunnel.h | 8 +- drivers/thunderbolt/xdomain.c | 139 ++++++++++++++++++++++----------- include/linux/thunderbolt.h | 32 +++++--- 10 files changed, 340 insertions(+), 125 deletions(-) diff --git a/drivers/net/thunderbolt.c b/drivers/net/thunderbolt.c index ed3743dc62b9..5c9ec91b6e78 100644 --- a/drivers/net/thunderbolt.c +++ b/drivers/net/thunderbolt.c @@ -28,7 +28,6 @@ #define TBNET_LOGOUT_TIMEOUT 100 #define TBNET_RING_SIZE 256 -#define TBNET_LOCAL_PATH 0xf #define TBNET_LOGIN_RETRIES 60 #define TBNET_LOGOUT_RETRIES 5 #define TBNET_MATCH_FRAGS_ID BIT(1) @@ -154,8 +153,8 @@ struct tbnet_ring { * @login_sent: ThunderboltIP login message successfully sent * @login_received: ThunderboltIP login message received from the remote * host - * @transmit_path: HopID the other end needs to use building the - * opposite side path. + * @local_transmit_path: HopID we are using to send out packets + * @remote_transmit_path: HopID the other end is using to send packets to us * @connection_lock: Lock serializing access to @login_sent, * @login_received and @transmit_path. 
* @login_retries: Number of login retries currently done @@ -184,7 +183,8 @@ struct tbnet { atomic_t command_id; bool login_sent; bool login_received; - u32 transmit_path; + int local_transmit_path; + int remote_transmit_path; struct mutex connection_lock; int login_retries; struct delayed_work login_work; @@ -257,7 +257,7 @@ static int tbnet_login_request(struct tbnet *net, u8 sequence) atomic_inc_return(&net->command_id)); request.proto_version = TBIP_LOGIN_PROTO_VERSION; - request.transmit_path = TBNET_LOCAL_PATH; + request.transmit_path = net->local_transmit_path; return tb_xdomain_request(xd, &request, sizeof(request), TB_CFG_PKG_XDOMAIN_RESP, &reply, @@ -364,10 +364,10 @@ static void tbnet_tear_down(struct tbnet *net, bool send_logout) mutex_lock(&net->connection_lock); if (net->login_sent && net->login_received) { - int retries = TBNET_LOGOUT_RETRIES; + int ret, retries = TBNET_LOGOUT_RETRIES; while (send_logout && retries-- > 0) { - int ret = tbnet_logout_request(net); + ret = tbnet_logout_request(net); if (ret != -ETIMEDOUT) break; } @@ -377,8 +377,16 @@ static void tbnet_tear_down(struct tbnet *net, bool send_logout) tbnet_free_buffers(&net->rx_ring); tbnet_free_buffers(&net->tx_ring); - if (tb_xdomain_disable_paths(net->xd)) + ret = tb_xdomain_disable_paths(net->xd, + net->local_transmit_path, + net->rx_ring.ring->hop, + net->remote_transmit_path, + net->tx_ring.ring->hop); + if (ret) netdev_warn(net->dev, "failed to disable DMA paths\n"); + + tb_xdomain_release_in_hopid(net->xd, net->remote_transmit_path); + net->remote_transmit_path = 0; } net->login_retries = 0; @@ -424,7 +432,7 @@ static int tbnet_handle_packet(const void *buf, size_t size, void *data) if (!ret) { mutex_lock(&net->connection_lock); net->login_received = true; - net->transmit_path = pkg->transmit_path; + net->remote_transmit_path = pkg->transmit_path; /* If we reached the number of max retries or * previous logout, schedule another round of @@ -597,12 +605,18 @@ static void tbnet_connected_work(struct work_struct *work) if (!connected) return; + ret = tb_xdomain_alloc_in_hopid(net->xd, net->remote_transmit_path); + if (ret != net->remote_transmit_path) { + netdev_err(net->dev, "failed to allocate Rx HopID\n"); + return; + } + /* Both logins successful so enable the high-speed DMA paths and * start the network device queue. 
*/ - ret = tb_xdomain_enable_paths(net->xd, TBNET_LOCAL_PATH, + ret = tb_xdomain_enable_paths(net->xd, net->local_transmit_path, net->rx_ring.ring->hop, - net->transmit_path, + net->remote_transmit_path, net->tx_ring.ring->hop); if (ret) { netdev_err(net->dev, "failed to enable DMA paths\n"); @@ -629,6 +643,7 @@ static void tbnet_connected_work(struct work_struct *work) err_stop_rings: tb_ring_stop(net->rx_ring.ring); tb_ring_stop(net->tx_ring.ring); + tb_xdomain_release_in_hopid(net->xd, net->remote_transmit_path); } static void tbnet_login_work(struct work_struct *work) @@ -851,6 +866,7 @@ static int tbnet_open(struct net_device *dev) struct tb_xdomain *xd = net->xd; u16 sof_mask, eof_mask; struct tb_ring *ring; + int hopid; netif_carrier_off(dev); @@ -862,6 +878,15 @@ static int tbnet_open(struct net_device *dev) } net->tx_ring.ring = ring; + hopid = tb_xdomain_alloc_out_hopid(xd, -1); + if (hopid < 0) { + netdev_err(dev, "failed to allocate Tx HopID\n"); + tb_ring_free(net->tx_ring.ring); + net->tx_ring.ring = NULL; + return hopid; + } + net->local_transmit_path = hopid; + sof_mask = BIT(TBIP_PDF_FRAME_START); eof_mask = BIT(TBIP_PDF_FRAME_END); @@ -893,6 +918,8 @@ static int tbnet_stop(struct net_device *dev) tb_ring_free(net->rx_ring.ring); net->rx_ring.ring = NULL; + + tb_xdomain_release_out_hopid(net->xd, net->local_transmit_path); tb_ring_free(net->tx_ring.ring); net->tx_ring.ring = NULL; diff --git a/drivers/thunderbolt/dma_test.c b/drivers/thunderbolt/dma_test.c index 6debaf5a6604..3bedecb236e0 100644 --- a/drivers/thunderbolt/dma_test.c +++ b/drivers/thunderbolt/dma_test.c @@ -13,7 +13,6 @@ #include #include -#define DMA_TEST_HOPID 8 #define DMA_TEST_TX_RING_SIZE 64 #define DMA_TEST_RX_RING_SIZE 256 #define DMA_TEST_FRAME_SIZE SZ_4K @@ -72,7 +71,9 @@ static const char * const dma_test_result_names[] = { * @svc: XDomain service the driver is bound to * @xd: XDomain the service belongs to * @rx_ring: Software ring holding RX frames + * @rx_hopid: HopID used for receiving frames * @tx_ring: Software ring holding TX frames + * @tx_hopid: HopID used for sending frames * @packets_to_send: Number of packets to send * @packets_to_receive: Number of packets to receive * @packets_sent: Actual number of packets sent @@ -92,7 +93,9 @@ struct dma_test { const struct tb_service *svc; struct tb_xdomain *xd; struct tb_ring *rx_ring; + int rx_hopid; struct tb_ring *tx_ring; + int tx_hopid; unsigned int packets_to_send; unsigned int packets_to_receive; unsigned int packets_sent; @@ -119,10 +122,12 @@ static void *dma_test_pattern; static void dma_test_free_rings(struct dma_test *dt) { if (dt->rx_ring) { + tb_xdomain_release_in_hopid(dt->xd, dt->rx_hopid); tb_ring_free(dt->rx_ring); dt->rx_ring = NULL; } if (dt->tx_ring) { + tb_xdomain_release_out_hopid(dt->xd, dt->tx_hopid); tb_ring_free(dt->tx_ring); dt->tx_ring = NULL; } @@ -151,6 +156,14 @@ static int dma_test_start_rings(struct dma_test *dt) dt->tx_ring = ring; e2e_tx_hop = ring->hop; + + ret = tb_xdomain_alloc_out_hopid(xd, -1); + if (ret < 0) { + dma_test_free_rings(dt); + return ret; + } + + dt->tx_hopid = ret; } if (dt->packets_to_receive) { @@ -168,11 +181,19 @@ } dt->rx_ring = ring; + + ret = tb_xdomain_alloc_in_hopid(xd, -1); + if (ret < 0) { + dma_test_free_rings(dt); + return ret; + } + + dt->rx_hopid = ret; } - ret = tb_xdomain_enable_paths(dt->xd, DMA_TEST_HOPID, + ret = tb_xdomain_enable_paths(dt->xd, dt->tx_hopid, dt->tx_ring ?
dt->tx_ring->hop : 0, - DMA_TEST_HOPID, + dt->rx_hopid, dt->rx_ring ? dt->rx_ring->hop : 0); if (ret) { dma_test_free_rings(dt); @@ -189,12 +210,18 @@ static int dma_test_start_rings(struct dma_test *dt) static void dma_test_stop_rings(struct dma_test *dt) { + int ret; + if (dt->rx_ring) tb_ring_stop(dt->rx_ring); if (dt->tx_ring) tb_ring_stop(dt->tx_ring); - if (tb_xdomain_disable_paths(dt->xd)) + ret = tb_xdomain_disable_paths(dt->xd, dt->tx_hopid, + dt->tx_ring ? dt->tx_ring->hop : 0, + dt->rx_hopid, + dt->rx_ring ? dt->rx_ring->hop : 0); + if (ret) dev_warn(&dt->svc->dev, "failed to disable DMA paths\n"); dma_test_free_rings(dt); diff --git a/drivers/thunderbolt/domain.c b/drivers/thunderbolt/domain.c index 039486b61b6a..a7d83eec3d15 100644 --- a/drivers/thunderbolt/domain.c +++ b/drivers/thunderbolt/domain.c @@ -791,6 +791,10 @@ int tb_domain_disconnect_pcie_paths(struct tb *tb) * tb_domain_approve_xdomain_paths() - Enable DMA paths for XDomain * @tb: Domain enabling the DMA paths * @xd: XDomain DMA paths are created to + * @transmit_path: HopID we are using to send out packets + * @transmit_ring: DMA ring used to send out packets + * @receive_path: HopID the other end is using to send packets to us + * @receive_ring: DMA ring used to receive packets from @receive_path * * Calls connection manager specific method to enable DMA paths to the * XDomain in question. @@ -799,18 +803,25 @@ int tb_domain_disconnect_pcie_paths(struct tb *tb) * particular returns %-ENOTSUPP if the connection manager * implementation does not support XDomains. */ -int tb_domain_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd) +int tb_domain_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd, + int transmit_path, int transmit_ring, + int receive_path, int receive_ring) { if (!tb->cm_ops->approve_xdomain_paths) return -ENOTSUPP; - return tb->cm_ops->approve_xdomain_paths(tb, xd); + return tb->cm_ops->approve_xdomain_paths(tb, xd, transmit_path, + transmit_ring, receive_path, receive_ring); } /** * tb_domain_disconnect_xdomain_paths() - Disable DMA paths for XDomain * @tb: Domain disabling the DMA paths * @xd: XDomain whose DMA paths are disconnected + * @transmit_path: HopID we are using to send out packets + * @transmit_ring: DMA ring used to send out packets + * @receive_path: HopID the other end is using to send packets to us + * @receive_ring: DMA ring used to receive packets from @receive_path * * Calls connection manager specific method to disconnect DMA paths to * the XDomain in question. @@ -819,12 +830,15 @@ int tb_domain_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd) * particular returns %-ENOTSUPP if the connection manager * implementation does not support XDomains. 
*/ -int tb_domain_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd) +int tb_domain_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd, + int transmit_path, int transmit_ring, + int receive_path, int receive_ring) { if (!tb->cm_ops->disconnect_xdomain_paths) return -ENOTSUPP; - return tb->cm_ops->disconnect_xdomain_paths(tb, xd); + return tb->cm_ops->disconnect_xdomain_paths(tb, xd, transmit_path, + transmit_ring, receive_path, receive_ring); } static int disconnect_xdomain(struct device *dev, void *data) @@ -835,7 +849,7 @@ static int disconnect_xdomain(struct device *dev, void *data) xd = tb_to_xdomain(dev); if (xd && xd->tb == tb) - ret = tb_xdomain_disable_paths(xd); + ret = tb_xdomain_disable_all_paths(xd); return ret; } diff --git a/drivers/thunderbolt/icm.c b/drivers/thunderbolt/icm.c index c111b946c64d..2f30b816705a 100644 --- a/drivers/thunderbolt/icm.c +++ b/drivers/thunderbolt/icm.c @@ -557,7 +557,9 @@ static int icm_fr_challenge_switch_key(struct tb *tb, struct tb_switch *sw, return 0; } -static int icm_fr_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd) +static int icm_fr_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd, + int transmit_path, int transmit_ring, + int receive_path, int receive_ring) { struct icm_fr_pkg_approve_xdomain_response reply; struct icm_fr_pkg_approve_xdomain request; @@ -568,10 +570,10 @@ static int icm_fr_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd) request.link_info = xd->depth << ICM_LINK_INFO_DEPTH_SHIFT | xd->link; memcpy(&request.remote_uuid, xd->remote_uuid, sizeof(*xd->remote_uuid)); - request.transmit_path = xd->transmit_path; - request.transmit_ring = xd->transmit_ring; - request.receive_path = xd->receive_path; - request.receive_ring = xd->receive_ring; + request.transmit_path = transmit_path; + request.transmit_ring = transmit_ring; + request.receive_path = receive_path; + request.receive_ring = receive_ring; memset(&reply, 0, sizeof(reply)); ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply), @@ -585,7 +587,9 @@ static int icm_fr_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd) return 0; } -static int icm_fr_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd) +static int icm_fr_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd, + int transmit_path, int transmit_ring, + int receive_path, int receive_ring) { u8 phy_port; u8 cmd; @@ -1122,7 +1126,9 @@ static int icm_tr_challenge_switch_key(struct tb *tb, struct tb_switch *sw, return 0; } -static int icm_tr_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd) +static int icm_tr_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd, + int transmit_path, int transmit_ring, + int receive_path, int receive_ring) { struct icm_tr_pkg_approve_xdomain_response reply; struct icm_tr_pkg_approve_xdomain request; @@ -1132,10 +1138,10 @@ static int icm_tr_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd) request.hdr.code = ICM_APPROVE_XDOMAIN; request.route_hi = upper_32_bits(xd->route); request.route_lo = lower_32_bits(xd->route); - request.transmit_path = xd->transmit_path; - request.transmit_ring = xd->transmit_ring; - request.receive_path = xd->receive_path; - request.receive_ring = xd->receive_ring; + request.transmit_path = transmit_path; + request.transmit_ring = transmit_ring; + request.receive_path = receive_path; + request.receive_ring = receive_ring; memcpy(&request.remote_uuid, xd->remote_uuid, sizeof(*xd->remote_uuid)); memset(&reply, 0, sizeof(reply)); @@ 
-1176,7 +1182,9 @@ static int icm_tr_xdomain_tear_down(struct tb *tb, struct tb_xdomain *xd, return 0; } -static int icm_tr_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd) +static int icm_tr_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd, + int transmit_path, int transmit_ring, + int receive_path, int receive_ring) { int ret; diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c index 4b3947965856..7e6dc2b03bed 100644 --- a/drivers/thunderbolt/tb.c +++ b/drivers/thunderbolt/tb.c @@ -1079,7 +1079,9 @@ static int tb_tunnel_pci(struct tb *tb, struct tb_switch *sw) return 0; } -static int tb_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd) +static int tb_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd, + int transmit_path, int transmit_ring, + int receive_path, int receive_ring) { struct tb_cm *tcm = tb_priv(tb); struct tb_port *nhi_port, *dst_port; @@ -1091,9 +1093,8 @@ static int tb_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd) nhi_port = tb_switch_find_port(tb->root_switch, TB_TYPE_NHI); mutex_lock(&tb->lock); - tunnel = tb_tunnel_alloc_dma(tb, nhi_port, dst_port, xd->transmit_ring, - xd->transmit_path, xd->receive_ring, - xd->receive_path); + tunnel = tb_tunnel_alloc_dma(tb, nhi_port, dst_port, transmit_path, + transmit_ring, receive_path, receive_ring); if (!tunnel) { mutex_unlock(&tb->lock); return -ENOMEM; @@ -1112,29 +1113,40 @@ static int tb_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd) return 0; } -static void __tb_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd) +static void __tb_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd, + int transmit_path, int transmit_ring, + int receive_path, int receive_ring) { - struct tb_port *dst_port; - struct tb_tunnel *tunnel; + struct tb_cm *tcm = tb_priv(tb); + struct tb_port *nhi_port, *dst_port; + struct tb_tunnel *tunnel, *n; struct tb_switch *sw; sw = tb_to_switch(xd->dev.parent); dst_port = tb_port_at(xd->route, sw); + nhi_port = tb_switch_find_port(tb->root_switch, TB_TYPE_NHI); - /* - * It is possible that the tunnel was already teared down (in - * case of cable disconnect) so it is fine if we cannot find it - * here anymore. - */ - tunnel = tb_find_tunnel(tb, TB_TUNNEL_DMA, NULL, dst_port); - tb_deactivate_and_free_tunnel(tunnel); + list_for_each_entry_safe(tunnel, n, &tcm->tunnel_list, list) { + if (!tb_tunnel_is_dma(tunnel)) + continue; + if (tunnel->src_port != nhi_port || tunnel->dst_port != dst_port) + continue; + + if (tb_tunnel_match_dma(tunnel, transmit_path, transmit_ring, + receive_path, receive_ring)) + tb_deactivate_and_free_tunnel(tunnel); + } } -static int tb_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd) +static int tb_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd, + int transmit_path, int transmit_ring, + int receive_path, int receive_ring) { if (!xd->is_unplugged) { mutex_lock(&tb->lock); - __tb_disconnect_xdomain_paths(tb, xd); + __tb_disconnect_xdomain_paths(tb, xd, transmit_path, + transmit_ring, receive_path, + receive_ring); mutex_unlock(&tb->lock); } return 0; @@ -1210,12 +1222,12 @@ static void tb_handle_hotplug(struct work_struct *work) * tb_xdomain_remove() so setting XDomain as * unplugged here prevents deadlock if they call * tb_xdomain_disable_paths(). We will tear down - * the path below. + * all the tunnels below. 
*/ xd->is_unplugged = true; tb_xdomain_remove(xd); port->xdomain = NULL; - __tb_disconnect_xdomain_paths(tb, xd); + __tb_disconnect_xdomain_paths(tb, xd, -1, -1, -1, -1); tb_xdomain_put(xd); tb_port_unconfigure_xdomain(port); } else if (tb_port_is_dpout(port) || tb_port_is_dpin(port)) { diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h index ec8cdbc761fa..2af6d632e3d0 100644 --- a/drivers/thunderbolt/tb.h +++ b/drivers/thunderbolt/tb.h @@ -406,8 +406,12 @@ struct tb_cm_ops { int (*challenge_switch_key)(struct tb *tb, struct tb_switch *sw, const u8 *challenge, u8 *response); int (*disconnect_pcie_paths)(struct tb *tb); - int (*approve_xdomain_paths)(struct tb *tb, struct tb_xdomain *xd); - int (*disconnect_xdomain_paths)(struct tb *tb, struct tb_xdomain *xd); + int (*approve_xdomain_paths)(struct tb *tb, struct tb_xdomain *xd, + int transmit_path, int transmit_ring, + int receive_path, int receive_ring); + int (*disconnect_xdomain_paths)(struct tb *tb, struct tb_xdomain *xd, + int transmit_path, int transmit_ring, + int receive_path, int receive_ring); int (*usb4_switch_op)(struct tb_switch *sw, u16 opcode, u32 *metadata, u8 *status, const void *tx_data, size_t tx_data_len, void *rx_data, size_t rx_data_len); @@ -641,8 +645,12 @@ int tb_domain_approve_switch(struct tb *tb, struct tb_switch *sw); int tb_domain_approve_switch_key(struct tb *tb, struct tb_switch *sw); int tb_domain_challenge_switch_key(struct tb *tb, struct tb_switch *sw); int tb_domain_disconnect_pcie_paths(struct tb *tb); -int tb_domain_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd); -int tb_domain_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd); +int tb_domain_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd, + int transmit_path, int transmit_ring, + int receive_path, int receive_ring); +int tb_domain_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd, + int transmit_path, int transmit_ring, + int receive_path, int receive_ring); int tb_domain_disconnect_all_paths(struct tb *tb); static inline struct tb *tb_domain_get(struct tb *tb) diff --git a/drivers/thunderbolt/tunnel.c b/drivers/thunderbolt/tunnel.c index 2e7ec037a73e..e1979bed7146 100644 --- a/drivers/thunderbolt/tunnel.c +++ b/drivers/thunderbolt/tunnel.c @@ -815,28 +815,28 @@ static void tb_dma_init_path(struct tb_path *path, unsigned int efc, u32 credits * @tb: Pointer to the domain structure * @nhi: Host controller port * @dst: Destination null port which the other domain is connected to - * @transmit_ring: NHI ring number used to send packets towards the - * other domain. Set to %0 if TX path is not needed. * @transmit_path: HopID used for transmitting packets - * @receive_ring: NHI ring number used to receive packets from the - * other domain. Set to %0 if RX path is not needed. + * @transmit_ring: NHI ring number used to send packets towards the + * other domain. Set to %-1 if TX path is not needed. * @receive_path: HopID used for receiving packets + * @receive_ring: NHI ring number used to receive packets from the + * other domain. Set to %-1 if RX path is not needed. * * Return: Returns a tb_tunnel on success or NULL on failure. 
*/ struct tb_tunnel *tb_tunnel_alloc_dma(struct tb *tb, struct tb_port *nhi, - struct tb_port *dst, int transmit_ring, - int transmit_path, int receive_ring, - int receive_path) + struct tb_port *dst, int transmit_path, + int transmit_ring, int receive_path, + int receive_ring) { struct tb_tunnel *tunnel; size_t npaths = 0, i = 0; struct tb_path *path; u32 credits; - if (receive_ring) + if (receive_ring > 0) npaths++; - if (transmit_ring) + if (transmit_ring > 0) npaths++; if (WARN_ON(!npaths)) @@ -851,7 +851,7 @@ struct tb_tunnel *tb_tunnel_alloc_dma(struct tb *tb, struct tb_port *nhi, credits = tb_dma_credits(nhi); - if (receive_ring) { + if (receive_ring > 0) { path = tb_path_alloc(tb, dst, receive_path, nhi, receive_ring, 0, "DMA RX"); if (!path) { @@ -862,7 +862,7 @@ struct tb_tunnel *tb_tunnel_alloc_dma(struct tb *tb, struct tb_port *nhi, tunnel->paths[i++] = path; } - if (transmit_ring) { + if (transmit_ring > 0) { path = tb_path_alloc(tb, nhi, transmit_ring, dst, transmit_path, 0, "DMA TX"); if (!path) { @@ -876,6 +876,66 @@ struct tb_tunnel *tb_tunnel_alloc_dma(struct tb *tb, struct tb_port *nhi, return tunnel; } +/** + * tb_tunnel_match_dma() - Match DMA tunnel + * @tunnel: Tunnel to match + * @transmit_path: HopID used for transmitting packets. Pass %-1 to ignore. + * @transmit_ring: NHI ring number used to send packets towards the + * other domain. Pass %-1 to ignore. + * @receive_path: HopID used for receiving packets. Pass %-1 to ignore. + * @receive_ring: NHI ring number used to receive packets from the + * other domain. Pass %-1 to ignore. + * + * This function can be used to match a specific DMA tunnel, if there are + * multiple DMA tunnels going through the same XDomain connection. + * Returns true if there is a match and false otherwise.
+ */ +bool tb_tunnel_match_dma(const struct tb_tunnel *tunnel, int transmit_path, + int transmit_ring, int receive_path, int receive_ring) +{ + const struct tb_path *tx_path = NULL, *rx_path = NULL; + int i; + + if (!receive_ring || !transmit_ring) + return false; + + for (i = 0; i < tunnel->npaths; i++) { + const struct tb_path *path = tunnel->paths[i]; + + if (!path) + continue; + + if (tb_port_is_nhi(path->hops[0].in_port)) + tx_path = path; + else if (tb_port_is_nhi(path->hops[path->path_length - 1].out_port)) + rx_path = path; + } + + if (transmit_ring > 0 || transmit_path > 0) { + if (!tx_path) + return false; + if (transmit_ring > 0 && + (tx_path->hops[0].in_hop_index != transmit_ring)) + return false; + if (transmit_path > 0 && + (tx_path->hops[tx_path->path_length - 1].next_hop_index != transmit_path)) + return false; + } + + if (receive_ring > 0 || receive_path > 0) { + if (!rx_path) + return false; + if (receive_path > 0 && + (rx_path->hops[0].in_hop_index != receive_path)) + return false; + if (receive_ring > 0 && + (rx_path->hops[rx_path->path_length - 1].next_hop_index != receive_ring)) + return false; + } + + return true; +} + static int tb_usb3_max_link_rate(struct tb_port *up, struct tb_port *down) { int ret, up_max_rate, down_max_rate; diff --git a/drivers/thunderbolt/tunnel.h b/drivers/thunderbolt/tunnel.h index 1d2a64eb060d..a66994fb4e60 100644 --- a/drivers/thunderbolt/tunnel.h +++ b/drivers/thunderbolt/tunnel.h @@ -70,9 +70,11 @@ struct tb_tunnel *tb_tunnel_alloc_dp(struct tb *tb, struct tb_port *in, struct tb_port *out, int max_up, int max_down); struct tb_tunnel *tb_tunnel_alloc_dma(struct tb *tb, struct tb_port *nhi, - struct tb_port *dst, int transmit_ring, - int transmit_path, int receive_ring, - int receive_path); + struct tb_port *dst, int transmit_path, + int transmit_ring, int receive_path, + int receive_ring); +bool tb_tunnel_match_dma(const struct tb_tunnel *tunnel, int transmit_path, + int transmit_ring, int receive_path, int receive_ring); struct tb_tunnel *tb_tunnel_discover_usb3(struct tb *tb, struct tb_port *down); struct tb_tunnel *tb_tunnel_alloc_usb3(struct tb *tb, struct tb_port *up, struct tb_port *down, int max_up, diff --git a/drivers/thunderbolt/xdomain.c b/drivers/thunderbolt/xdomain.c index ab56757d7c24..b21d99d59412 100644 --- a/drivers/thunderbolt/xdomain.c +++ b/drivers/thunderbolt/xdomain.c @@ -1295,6 +1295,8 @@ static void tb_xdomain_release(struct device *dev) kfree(xd->local_property_block); tb_property_free_dir(xd->remote_properties); + ida_destroy(&xd->out_hopids); + ida_destroy(&xd->in_hopids); ida_destroy(&xd->service_ids); kfree(xd->local_uuid); @@ -1388,6 +1390,8 @@ struct tb_xdomain *tb_xdomain_alloc(struct tb *tb, struct device *parent, xd->route = route; xd->local_max_hopid = down->config.max_in_hop_id; ida_init(&xd->service_ids); + ida_init(&xd->in_hopids); + ida_init(&xd->out_hopids); mutex_init(&xd->lock); INIT_DELAYED_WORK(&xd->get_uuid_work, tb_xdomain_get_uuid); INIT_DELAYED_WORK(&xd->get_properties_work, tb_xdomain_get_properties); @@ -1553,73 +1557,118 @@ void tb_xdomain_lane_bonding_disable(struct tb_xdomain *xd) EXPORT_SYMBOL_GPL(tb_xdomain_lane_bonding_disable); /** - * tb_xdomain_enable_paths() - Enable DMA paths for XDomain connection + * tb_xdomain_alloc_in_hopid() - Allocate input HopID for tunneling * @xd: XDomain connection - * @transmit_path: HopID of the transmit path the other end is using to - * send packets - * @transmit_ring: DMA ring used to receive packets from the other end - * @receive_path: HopID of 
the receive path the other end is using to - receive packets - * @receive_ring: DMA ring used to send packets to the other end - * - * The function enables DMA paths accordingly so that after successful - * return the caller can send and receive packets using high-speed DMA - * path. + * @hopid: Preferred HopID or %-1 for next available - * - * Return: %0 in case of success and negative errno in case of error + * Returns allocated HopID or negative errno. Specifically returns + * %-ENOSPC if there are no more available HopIDs. Returned HopID is + * guaranteed to be within range supported by the input lane adapter. + * Call tb_xdomain_release_in_hopid() to release the allocated HopID. */ -int tb_xdomain_enable_paths(struct tb_xdomain *xd, u16 transmit_path, - u16 transmit_ring, u16 receive_path, - u16 receive_ring) +int tb_xdomain_alloc_in_hopid(struct tb_xdomain *xd, int hopid) { - int ret; + if (hopid < 0) + hopid = TB_PATH_MIN_HOPID; + if (hopid < TB_PATH_MIN_HOPID || hopid > xd->local_max_hopid) + return -EINVAL; - mutex_lock(&xd->lock); + return ida_alloc_range(&xd->in_hopids, hopid, xd->local_max_hopid, + GFP_KERNEL); +} +EXPORT_SYMBOL_GPL(tb_xdomain_alloc_in_hopid); - if (xd->transmit_path) { - ret = xd->transmit_path == transmit_path ? 0 : -EBUSY; - goto exit_unlock; - } +/** + * tb_xdomain_alloc_out_hopid() - Allocate output HopID for tunneling + * @xd: XDomain connection + * @hopid: Preferred HopID or %-1 for next available + * + * Returns allocated HopID or negative errno. Specifically returns + * %-ENOSPC if there are no more available HopIDs. Returned HopID is + * guaranteed to be within range supported by the output lane adapter. + * Call tb_xdomain_release_out_hopid() to release the allocated HopID. + */ +int tb_xdomain_alloc_out_hopid(struct tb_xdomain *xd, int hopid) +{ + if (hopid < 0) + hopid = TB_PATH_MIN_HOPID; + if (hopid < TB_PATH_MIN_HOPID || hopid > xd->remote_max_hopid) + return -EINVAL; - xd->transmit_path = transmit_path; - xd->transmit_ring = transmit_ring; - xd->receive_path = receive_path; - xd->receive_ring = receive_ring; + return ida_alloc_range(&xd->out_hopids, hopid, xd->remote_max_hopid, + GFP_KERNEL); +} +EXPORT_SYMBOL_GPL(tb_xdomain_alloc_out_hopid); - ret = tb_domain_approve_xdomain_paths(xd->tb, xd); +/** + * tb_xdomain_release_in_hopid() - Release input HopID + * @xd: XDomain connection + * @hopid: HopID to release + */ +void tb_xdomain_release_in_hopid(struct tb_xdomain *xd, int hopid) +{ + ida_free(&xd->in_hopids, hopid); +} +EXPORT_SYMBOL_GPL(tb_xdomain_release_in_hopid); -exit_unlock: - mutex_unlock(&xd->lock); +/** + * tb_xdomain_release_out_hopid() - Release output HopID + * @xd: XDomain connection + * @hopid: HopID to release + */ +void tb_xdomain_release_out_hopid(struct tb_xdomain *xd, int hopid) +{ + ida_free(&xd->out_hopids, hopid); +} +EXPORT_SYMBOL_GPL(tb_xdomain_release_out_hopid); - return ret; +/** + * tb_xdomain_enable_paths() - Enable DMA paths for XDomain connection + * @xd: XDomain connection + * @transmit_path: HopID we are using to send out packets + * @transmit_ring: DMA ring used to send out packets + * @receive_path: HopID the other end is using to send packets to us + * @receive_ring: DMA ring used to receive packets from @receive_path + * + * The function enables DMA paths accordingly so that after successful + * return the caller can send and receive packets using high-speed DMA + * path. If a transmit or receive path is not needed, pass %-1 for those * parameters.
+ * + * Return: %0 in case of success and negative errno in case of error + */ +int tb_xdomain_enable_paths(struct tb_xdomain *xd, int transmit_path, + int transmit_ring, int receive_path, + int receive_ring) +{ + return tb_domain_approve_xdomain_paths(xd->tb, xd, transmit_path, + transmit_ring, receive_path, + receive_ring); } EXPORT_SYMBOL_GPL(tb_xdomain_enable_paths); /** * tb_xdomain_disable_paths() - Disable DMA paths for XDomain connection * @xd: XDomain connection + * @transmit_path: HopID we are using to send out packets + * @transmit_ring: DMA ring used to send out packets + * @receive_path: HopID the other end is using to send packets to us + * @receive_ring: DMA ring used to receive packets from @receive_path * * This does the opposite of tb_xdomain_enable_paths(). After call to - * this the caller is not expected to use the rings anymore. + * this the caller is not expected to use the rings anymore. Passing %-1 + * as path/ring parameter means don't care. Normally the callers should + * pass the same values here as they do when paths are enabled. * * Return: %0 in case of success and negative errno in case of error */ -int tb_xdomain_disable_paths(struct tb_xdomain *xd) +int tb_xdomain_disable_paths(struct tb_xdomain *xd, int transmit_path, + int transmit_ring, int receive_path, + int receive_ring) { - int ret = 0; - - mutex_lock(&xd->lock); - if (xd->transmit_path) { - xd->transmit_path = 0; - xd->transmit_ring = 0; - xd->receive_path = 0; - xd->receive_ring = 0; - - ret = tb_domain_disconnect_xdomain_paths(xd->tb, xd); - } - mutex_unlock(&xd->lock); - - return ret; + return tb_domain_disconnect_xdomain_paths(xd->tb, xd, transmit_path, + transmit_ring, receive_path, + receive_ring); } EXPORT_SYMBOL_GPL(tb_xdomain_disable_paths); diff --git a/include/linux/thunderbolt.h b/include/linux/thunderbolt.h index 3e0ce654d60c..e7c96c37174f 100644 --- a/include/linux/thunderbolt.h +++ b/include/linux/thunderbolt.h @@ -190,11 +190,9 @@ void tb_unregister_property_dir(const char *key, struct tb_property_dir *dir); * @is_unplugged: The XDomain is unplugged * @needs_uuid: If the XDomain does not have @remote_uuid it will be * queried first - * @transmit_path: HopID which the remote end expects us to transmit - * @transmit_ring: Local ring (hop) where outgoing packets are pushed - * @receive_path: HopID which we expect the remote end to transmit - * @receive_ring: Local ring (hop) where incoming packets arrive * @service_ids: Used to generate IDs for the services + * @in_hopids: Input HopIDs for DMA tunneling + * @out_hopids: Output HopIDs for DMA tunneling * @local_property_block: Local block of properties * @local_property_block_gen: Generation of @local_property_block * @local_property_block_len: Length of the @local_property_block in dwords @@ -238,11 +236,9 @@ struct tb_xdomain { unsigned int link_width; bool is_unplugged; bool needs_uuid; - u16 transmit_path; - u16 transmit_ring; - u16 receive_path; - u16 receive_ring; struct ida service_ids; + struct ida in_hopids; + struct ida out_hopids; u32 *local_property_block; u32 local_property_block_gen; u32 local_property_block_len; @@ -260,10 +256,22 @@ struct tb_xdomain { int tb_xdomain_lane_bonding_enable(struct tb_xdomain *xd); void tb_xdomain_lane_bonding_disable(struct tb_xdomain *xd); -int tb_xdomain_enable_paths(struct tb_xdomain *xd, u16 transmit_path, - u16 transmit_ring, u16 receive_path, - u16 receive_ring); -int tb_xdomain_disable_paths(struct tb_xdomain *xd); +int tb_xdomain_alloc_in_hopid(struct tb_xdomain *xd, int hopid);
+void tb_xdomain_release_in_hopid(struct tb_xdomain *xd, int hopid); +int tb_xdomain_alloc_out_hopid(struct tb_xdomain *xd, int hopid); +void tb_xdomain_release_out_hopid(struct tb_xdomain *xd, int hopid); +int tb_xdomain_enable_paths(struct tb_xdomain *xd, int transmit_path, + int transmit_ring, int receive_path, + int receive_ring); +int tb_xdomain_disable_paths(struct tb_xdomain *xd, int transmit_path, + int transmit_ring, int receive_path, + int receive_ring); + +static inline int tb_xdomain_disable_all_paths(struct tb_xdomain *xd) +{ + return tb_xdomain_disable_paths(xd, -1, -1, -1, -1); +} + struct tb_xdomain *tb_xdomain_find_by_uuid(struct tb *tb, const uuid_t *uuid); struct tb_xdomain *tb_xdomain_find_by_route(struct tb *tb, u64 route);
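Putting the new API together, a service driver now negotiates HopIDs at runtime rather than hard-coding them. The sketch below is illustrative only, not code from the series: the function name and the error unwinding are invented, while the tb_xdomain_*() calls and the ring->hop usage follow the converted drivers (tbnet and dma_test) above. It assumes the remote end has already announced the HopID it transmits on (remote_tx_path), as ThunderboltIP does in its login packet:

/* Hedged sketch: xd, tx_ring and rx_ring are set up elsewhere */
static int example_enable_dma(struct tb_xdomain *xd, struct tb_ring *tx_ring,
			      struct tb_ring *rx_ring, int remote_tx_path)
{
	int in_hopid, out_hopid, ret;

	/* Reserve the input HopID the remote end said it will use */
	in_hopid = tb_xdomain_alloc_in_hopid(xd, remote_tx_path);
	if (in_hopid < 0)
		return in_hopid;
	if (in_hopid != remote_tx_path) {
		/* Announced HopID was already taken by another tunnel */
		tb_xdomain_release_in_hopid(xd, in_hopid);
		return -EBUSY;
	}

	/* Pick any free output HopID for our transmit direction */
	out_hopid = tb_xdomain_alloc_out_hopid(xd, -1);
	if (out_hopid < 0) {
		tb_xdomain_release_in_hopid(xd, in_hopid);
		return out_hopid;
	}

	ret = tb_xdomain_enable_paths(xd, out_hopid, tx_ring->hop,
				      in_hopid, rx_ring->hop);
	if (ret) {
		tb_xdomain_release_out_hopid(xd, out_hopid);
		tb_xdomain_release_in_hopid(xd, in_hopid);
	}
	return ret;
}

Teardown is symmetric: call tb_xdomain_disable_paths() with the same four values (or tb_xdomain_disable_all_paths() on unplug) and release both HopIDs.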
Miller" , Jakub Kicinski , netdev@vger.kernel.org Subject: [PATCH 14/18] net: thunderbolt: Align the driver to the USB4 networking spec Date: Thu, 4 Mar 2021 15:31:21 +0300 Message-Id: <20210304123125.43630-15-mika.westerberg@linux.intel.com> X-Mailer: git-send-email 2.30.1 In-Reply-To: <20210304123125.43630-1-mika.westerberg@linux.intel.com> References: <20210304123125.43630-1-mika.westerberg@linux.intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-usb@vger.kernel.org The USB4 networking spec (USB4NET) recommends different timeouts, and also suggest that the driver sets the 64k frame support flag in the properties block. Make the networking driver to honor this. Signed-off-by: Mika Westerberg --- drivers/net/thunderbolt.c | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/drivers/net/thunderbolt.c b/drivers/net/thunderbolt.c index 5c9ec91b6e78..9a6a8353e192 100644 --- a/drivers/net/thunderbolt.c +++ b/drivers/net/thunderbolt.c @@ -25,12 +25,13 @@ /* Protocol timeouts in ms */ #define TBNET_LOGIN_DELAY 4500 #define TBNET_LOGIN_TIMEOUT 500 -#define TBNET_LOGOUT_TIMEOUT 100 +#define TBNET_LOGOUT_TIMEOUT 1000 #define TBNET_RING_SIZE 256 #define TBNET_LOGIN_RETRIES 60 -#define TBNET_LOGOUT_RETRIES 5 +#define TBNET_LOGOUT_RETRIES 10 #define TBNET_MATCH_FRAGS_ID BIT(1) +#define TBNET_64K_FRAMES BIT(2) #define TBNET_MAX_MTU SZ_64K #define TBNET_FRAME_SIZE SZ_4K #define TBNET_MAX_PAYLOAD_SIZE \ @@ -1367,7 +1368,7 @@ static int __init tbnet_init(void) * the moment. */ tb_property_add_immediate(tbnet_dir, "prtcstns", - TBNET_MATCH_FRAGS_ID); + TBNET_MATCH_FRAGS_ID | TBNET_64K_FRAMES); ret = tb_register_property_dir("network", tbnet_dir); if (ret) { From patchwork Thu Mar 4 12:31:22 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Mika Westerberg X-Patchwork-Id: 393300 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id CC104C433E9 for ; Thu, 4 Mar 2021 12:34:40 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id A495064F1E for ; Thu, 4 Mar 2021 12:34:40 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S240946AbhCDMdz (ORCPT ); Thu, 4 Mar 2021 07:33:55 -0500 Received: from mga12.intel.com ([192.55.52.136]:64847 "EHLO mga12.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S240809AbhCDMd3 (ORCPT ); Thu, 4 Mar 2021 07:33:29 -0500 IronPort-SDR: UCdShMzxMrfuJvJyufYiGMvK0y22vlBI60KuGNi08/XWA+oVASEA1ydrCSX4dQJshavFw84RFw 02FuzShtNXhg== X-IronPort-AV: E=McAfee;i="6000,8403,9912"; a="166662662" X-IronPort-AV: E=Sophos;i="5.81,222,1610438400"; d="scan'208";a="166662662" Received: from fmsmga003.fm.intel.com ([10.253.24.29]) by fmsmga106.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Mar 2021 04:31:34 -0800 IronPort-SDR: PA852FyUm4SUbJIq7xAA+wPYsrycDVwjww+1UEcjDPAetqfagp/wR7aE5kXbMwnAo54PEqL+4e JzXcP0IZToKg== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.81,222,1610438400"; d="scan'208";a="435785761" Received: from 
black.fi.intel.com ([10.237.72.28]) by FMSMGA003.fm.intel.com with ESMTP; 04 Mar 2021 04:31:32 -0800 Received: by black.fi.intel.com (Postfix, from userid 1001) id C853B5CC; Thu, 4 Mar 2021 14:31:26 +0200 (EET) From: Mika Westerberg To: linux-usb@vger.kernel.org Cc: Michael Jamet , Yehezkel Bernat , Andreas Noever , Isaac Hazan , Mika Westerberg , Lukas Wunner , "David S . Miller" , Jakub Kicinski , netdev@vger.kernel.org Subject: [PATCH 15/18] thunderbolt: Add KUnit tests for XDomain properties Date: Thu, 4 Mar 2021 15:31:22 +0300 Message-Id: <20210304123125.43630-16-mika.westerberg@linux.intel.com> X-Mailer: git-send-email 2.30.1 In-Reply-To: <20210304123125.43630-1-mika.westerberg@linux.intel.com> References: <20210304123125.43630-1-mika.westerberg@linux.intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-usb@vger.kernel.org This adds KUnit tests for parsing, formatting and copying of XDomain properties. Signed-off-by: Mika Westerberg --- drivers/thunderbolt/test.c | 252 +++++++++++++++++++++++++++++++++++++ 1 file changed, 252 insertions(+) diff --git a/drivers/thunderbolt/test.c b/drivers/thunderbolt/test.c index 464c2d37b992..4e1e7ae2d90d 100644 --- a/drivers/thunderbolt/test.c +++ b/drivers/thunderbolt/test.c @@ -1594,6 +1594,255 @@ static void tb_test_tunnel_port_on_path(struct kunit *test) tb_tunnel_free(dp_tunnel); } +static const u32 root_directory[] = { + 0x55584401, /* "UXD" v1 */ + 0x00000018, /* Root directory length */ + 0x76656e64, /* "vend" */ + 0x6f726964, /* "orid" */ + 0x76000001, /* "v" R 1 */ + 0x00000a27, /* Immediate value, ! Vendor ID */ + 0x76656e64, /* "vend" */ + 0x6f726964, /* "orid" */ + 0x74000003, /* "t" R 3 */ + 0x0000001a, /* Text leaf offset, (“Apple Inc.”) */ + 0x64657669, /* "devi" */ + 0x63656964, /* "ceid" */ + 0x76000001, /* "v" R 1 */ + 0x0000000a, /* Immediate value, ! Device ID */ + 0x64657669, /* "devi" */ + 0x63656964, /* "ceid" */ + 0x74000003, /* "t" R 3 */ + 0x0000001d, /* Text leaf offset, (“Macintosh”) */ + 0x64657669, /* "devi" */ + 0x63657276, /* "cerv" */ + 0x76000001, /* "v" R 1 */ + 0x80000100, /* Immediate value, Device Revision */ + 0x6e657477, /* "netw" */ + 0x6f726b00, /* "ork" */ + 0x44000014, /* "D" R 20 */ + 0x00000021, /* Directory data offset, (Network Directory) */ + 0x4170706c, /* "Appl" */ + 0x6520496e, /* "e In" */ + 0x632e0000, /* "c." ! 
*/ + 0x4d616369, /* "Maci" */ + 0x6e746f73, /* "ntos" */ + 0x68000000, /* "h" */ + 0x00000000, /* padding */ + 0xca8961c6, /* Directory UUID, Network Directory */ + 0x9541ce1c, /* Directory UUID, Network Directory */ + 0x5949b8bd, /* Directory UUID, Network Directory */ + 0x4f5a5f2e, /* Directory UUID, Network Directory */ + 0x70727463, /* "prtc" */ + 0x69640000, /* "id" */ + 0x76000001, /* "v" R 1 */ + 0x00000001, /* Immediate value, Network Protocol ID */ + 0x70727463, /* "prtc" */ + 0x76657273, /* "vers" */ + 0x76000001, /* "v" R 1 */ + 0x00000001, /* Immediate value, Network Protocol Version */ + 0x70727463, /* "prtc" */ + 0x72657673, /* "revs" */ + 0x76000001, /* "v" R 1 */ + 0x00000001, /* Immediate value, Network Protocol Revision */ + 0x70727463, /* "prtc" */ + 0x73746e73, /* "stns" */ + 0x76000001, /* "v" R 1 */ + 0x00000000, /* Immediate value, Network Protocol Settings */ +}; + +static const uuid_t network_dir_uuid = + UUID_INIT(0xc66189ca, 0x1cce, 0x4195, + 0xbd, 0xb8, 0x49, 0x59, 0x2e, 0x5f, 0x5a, 0x4f); + +static void tb_test_property_parse(struct kunit *test) +{ + struct tb_property_dir *dir, *network_dir; + struct tb_property *p; + + dir = tb_property_parse_dir(root_directory, ARRAY_SIZE(root_directory)); + KUNIT_ASSERT_TRUE(test, dir != NULL); + + p = tb_property_find(dir, "foo", TB_PROPERTY_TYPE_TEXT); + KUNIT_ASSERT_TRUE(test, !p); + + p = tb_property_find(dir, "vendorid", TB_PROPERTY_TYPE_TEXT); + KUNIT_ASSERT_TRUE(test, p != NULL); + KUNIT_EXPECT_STREQ(test, p->value.text, "Apple Inc."); + + p = tb_property_find(dir, "vendorid", TB_PROPERTY_TYPE_VALUE); + KUNIT_ASSERT_TRUE(test, p != NULL); + KUNIT_EXPECT_EQ(test, p->value.immediate, (u32)0xa27); + + p = tb_property_find(dir, "deviceid", TB_PROPERTY_TYPE_TEXT); + KUNIT_ASSERT_TRUE(test, p != NULL); + KUNIT_EXPECT_STREQ(test, p->value.text, "Macintosh"); + + p = tb_property_find(dir, "deviceid", TB_PROPERTY_TYPE_VALUE); + KUNIT_ASSERT_TRUE(test, p != NULL); + KUNIT_EXPECT_EQ(test, p->value.immediate, (u32)0xa); + + p = tb_property_find(dir, "missing", TB_PROPERTY_TYPE_DIRECTORY); + KUNIT_ASSERT_TRUE(test, !p); + + p = tb_property_find(dir, "network", TB_PROPERTY_TYPE_DIRECTORY); + KUNIT_ASSERT_TRUE(test, p != NULL); + + network_dir = p->value.dir; + KUNIT_EXPECT_TRUE(test, uuid_equal(network_dir->uuid, &network_dir_uuid)); + + p = tb_property_find(network_dir, "prtcid", TB_PROPERTY_TYPE_VALUE); + KUNIT_ASSERT_TRUE(test, p != NULL); + KUNIT_EXPECT_EQ(test, p->value.immediate, (u32)0x1); + + p = tb_property_find(network_dir, "prtcvers", TB_PROPERTY_TYPE_VALUE); + KUNIT_ASSERT_TRUE(test, p != NULL); + KUNIT_EXPECT_EQ(test, p->value.immediate, (u32)0x1); + + p = tb_property_find(network_dir, "prtcrevs", TB_PROPERTY_TYPE_VALUE); + KUNIT_ASSERT_TRUE(test, p != NULL); + KUNIT_EXPECT_EQ(test, p->value.immediate, (u32)0x1); + + p = tb_property_find(network_dir, "prtcstns", TB_PROPERTY_TYPE_VALUE); + KUNIT_ASSERT_TRUE(test, p != NULL); + KUNIT_EXPECT_EQ(test, p->value.immediate, (u32)0x0); + + p = tb_property_find(network_dir, "deviceid", TB_PROPERTY_TYPE_VALUE); + KUNIT_EXPECT_TRUE(test, !p); + p = tb_property_find(network_dir, "deviceid", TB_PROPERTY_TYPE_TEXT); + KUNIT_EXPECT_TRUE(test, !p); + + tb_property_free_dir(dir); +} + +static void tb_test_property_format(struct kunit *test) +{ + struct tb_property_dir *dir; + ssize_t block_len; + u32 *block; + int ret, i; + + dir = tb_property_parse_dir(root_directory, ARRAY_SIZE(root_directory)); + KUNIT_ASSERT_TRUE(test, dir != NULL); + + ret = tb_property_format_dir(dir, NULL, 0); 
+ KUNIT_ASSERT_EQ(test, ret, (int)ARRAY_SIZE(root_directory)); + + block_len = ret; + + block = kunit_kzalloc(test, block_len * sizeof(u32), GFP_KERNEL); + KUNIT_ASSERT_TRUE(test, block != NULL); + + ret = tb_property_format_dir(dir, block, block_len); + KUNIT_EXPECT_EQ(test, ret, 0); + + for (i = 0; i < ARRAY_SIZE(root_directory); i++) + KUNIT_EXPECT_EQ(test, root_directory[i], block[i]); + + tb_property_free_dir(dir); +} + +static void compare_dirs(struct kunit *test, struct tb_property_dir *d1, + struct tb_property_dir *d2) +{ + struct tb_property *p1, *p2, *tmp; + int n1, n2, i; + + if (d1->uuid) { + KUNIT_ASSERT_TRUE(test, d2->uuid != NULL); + KUNIT_ASSERT_TRUE(test, uuid_equal(d1->uuid, d2->uuid)); + } else { + KUNIT_ASSERT_TRUE(test, d2->uuid == NULL); + } + + n1 = 0; + tb_property_for_each(d1, tmp) + n1++; + KUNIT_ASSERT_NE(test, n1, 0); + + n2 = 0; + tb_property_for_each(d2, tmp) + n2++; + KUNIT_ASSERT_NE(test, n2, 0); + + KUNIT_ASSERT_EQ(test, n1, n2); + + p1 = NULL; + p2 = NULL; + for (i = 0; i < n1; i++) { + p1 = tb_property_get_next(d1, p1); + KUNIT_ASSERT_TRUE(test, p1 != NULL); + p2 = tb_property_get_next(d2, p2); + KUNIT_ASSERT_TRUE(test, p2 != NULL); + + KUNIT_ASSERT_STREQ(test, &p1->key[0], &p2->key[0]); + KUNIT_ASSERT_EQ(test, p1->type, p2->type); + KUNIT_ASSERT_EQ(test, p1->length, p2->length); + + switch (p1->type) { + case TB_PROPERTY_TYPE_DIRECTORY: + KUNIT_ASSERT_TRUE(test, p1->value.dir != NULL); + KUNIT_ASSERT_TRUE(test, p2->value.dir != NULL); + compare_dirs(test, p1->value.dir, p2->value.dir); + break; + + case TB_PROPERTY_TYPE_DATA: + KUNIT_ASSERT_TRUE(test, p1->value.data != NULL); + KUNIT_ASSERT_TRUE(test, p2->value.data != NULL); + KUNIT_ASSERT_TRUE(test, + !memcmp(p1->value.data, p2->value.data, + p1->length * 4) + ); + break; + + case TB_PROPERTY_TYPE_TEXT: + KUNIT_ASSERT_TRUE(test, p1->value.text != NULL); + KUNIT_ASSERT_TRUE(test, p2->value.text != NULL); + KUNIT_ASSERT_STREQ(test, p1->value.text, p2->value.text); + break; + + case TB_PROPERTY_TYPE_VALUE: + KUNIT_ASSERT_EQ(test, p1->value.immediate, + p2->value.immediate); + break; + default: + KUNIT_FAIL(test, "unexpected property type"); + break; + } + } +} + +static void tb_test_property_copy(struct kunit *test) +{ + struct tb_property_dir *src, *dst; + u32 *block; + int ret, i; + + src = tb_property_parse_dir(root_directory, ARRAY_SIZE(root_directory)); + KUNIT_ASSERT_TRUE(test, src != NULL); + + dst = tb_property_copy_dir(src); + KUNIT_ASSERT_TRUE(test, dst != NULL); + + /* Compare the structures */ + compare_dirs(test, src, dst); + + /* Compare the resulting property block */ + ret = tb_property_format_dir(dst, NULL, 0); + KUNIT_ASSERT_EQ(test, ret, (int)ARRAY_SIZE(root_directory)); + + block = kunit_kzalloc(test, sizeof(root_directory), GFP_KERNEL); + KUNIT_ASSERT_TRUE(test, block != NULL); + + ret = tb_property_format_dir(dst, block, ARRAY_SIZE(root_directory)); + KUNIT_EXPECT_TRUE(test, !ret); + + for (i = 0; i < ARRAY_SIZE(root_directory); i++) + KUNIT_EXPECT_EQ(test, root_directory[i], block[i]); + + tb_property_free_dir(dst); + tb_property_free_dir(src); +} + static struct kunit_case tb_test_cases[] = { KUNIT_CASE(tb_test_path_basic), KUNIT_CASE(tb_test_path_not_connected_walk), @@ -1616,6 +1865,9 @@ static struct kunit_case tb_test_cases[] = { KUNIT_CASE(tb_test_tunnel_dp_max_length), KUNIT_CASE(tb_test_tunnel_port_on_path), KUNIT_CASE(tb_test_tunnel_usb3), + KUNIT_CASE(tb_test_property_parse), + KUNIT_CASE(tb_test_property_format), + KUNIT_CASE(tb_test_property_copy), { } }; From patchwork 
Thu Mar 4 12:31:23 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mika Westerberg X-Patchwork-Id: 393298 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id BBF0DC4332E for ; Thu, 4 Mar 2021 12:35:29 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id A0D6A64F4A for ; Thu, 4 Mar 2021 12:35:29 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S240977AbhCDMe7 (ORCPT ); Thu, 4 Mar 2021 07:34:59 -0500 Received: from mga01.intel.com ([192.55.52.88]:37600 "EHLO mga01.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S241001AbhCDMei (ORCPT ); Thu, 4 Mar 2021 07:34:38 -0500 IronPort-SDR: P4kUpiV+7I4SoaBiOnsotBaQn+tqt4tnT4AIVSqVX/6QpdvJeo6QBrOR23v1H1nb2u8Gj5fw8P LVYnP5noZJYg== X-IronPort-AV: E=McAfee;i="6000,8403,9912"; a="207113165" X-IronPort-AV: E=Sophos;i="5.81,222,1610438400"; d="scan'208";a="207113165" Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Mar 2021 04:31:34 -0800 IronPort-SDR: +3I0QyypeCV7zhAfr/KVUo5nVfl2jzuvP903v3uhyovntpagFZVjYZJaCaR63Km/JOIj3WDbPH dxjw1Gt/O0jQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.81,222,1610438400"; d="scan'208";a="374534706" Received: from black.fi.intel.com ([10.237.72.28]) by fmsmga007.fm.intel.com with ESMTP; 04 Mar 2021 04:31:32 -0800 Received: by black.fi.intel.com (Postfix, from userid 1001) id D41C860B; Thu, 4 Mar 2021 14:31:26 +0200 (EET) From: Mika Westerberg To: linux-usb@vger.kernel.org Cc: Michael Jamet , Yehezkel Bernat , Andreas Noever , Isaac Hazan , Mika Westerberg , Lukas Wunner , "David S . Miller" , Jakub Kicinski , netdev@vger.kernel.org Subject: [PATCH 16/18] thunderbolt: Add KUnit tests for DMA tunnels Date: Thu, 4 Mar 2021 15:31:23 +0300 Message-Id: <20210304123125.43630-17-mika.westerberg@linux.intel.com> X-Mailer: git-send-email 2.30.1 In-Reply-To: <20210304123125.43630-1-mika.westerberg@linux.intel.com> References: <20210304123125.43630-1-mika.westerberg@linux.intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-usb@vger.kernel.org Add a couple of tests to check DMA tunneling functionality. 
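The convention these tests exercise is that %-1 acts as a "don't care" wildcard in tb_tunnel_match_dma(), so a caller that knows only some of a tunnel's HopIDs or rings can still match it. Below is a minimal stand-alone sketch of that per-parameter rule; hopid_match() is a hypothetical helper for illustration, not part of this patch.

#include <stdbool.h>

/* -1 acts as a wildcard; any other value must match exactly. */
static bool hopid_match(int wanted, int actual)
{
	return wanted == -1 || wanted == actual;
}

/*
 * A DMA tunnel matches when all four parameters (transmit path/ring,
 * receive path/ring) pass this check, which is why e.g. matching
 * (15, 1, 15, 1) with (-1, -1, 15, 1) succeeds in the assertions below
 * while (8, 1, 15, 1) does not.
 */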
Signed-off-by: Mika Westerberg --- drivers/thunderbolt/test.c | 240 +++++++++++++++++++++++++++++++++++++ 1 file changed, 240 insertions(+) diff --git a/drivers/thunderbolt/test.c b/drivers/thunderbolt/test.c index 4e1e7ae2d90d..5ff5a03bc9ce 100644 --- a/drivers/thunderbolt/test.c +++ b/drivers/thunderbolt/test.c @@ -119,6 +119,7 @@ static struct tb_switch *alloc_host(struct kunit *test) sw->ports[7].config.type = TB_TYPE_NHI; sw->ports[7].config.max_in_hop_id = 11; sw->ports[7].config.max_out_hop_id = 11; + sw->ports[7].config.nfc_credits = 0x41800000; sw->ports[8].config.type = TB_TYPE_PCIE_DOWN; sw->ports[8].config.max_in_hop_id = 8; @@ -1594,6 +1595,240 @@ static void tb_test_tunnel_port_on_path(struct kunit *test) tb_tunnel_free(dp_tunnel); } +static void tb_test_tunnel_dma(struct kunit *test) +{ + struct tb_port *nhi, *port; + struct tb_tunnel *tunnel; + struct tb_switch *host; + + /* + * Create DMA tunnel from NHI to port 1 and back. + * + * [Host 1] + * 1 ^ In HopID 1 -> Out HopID 8 + * | + * v In HopID 8 -> Out HopID 1 + * ............ Domain border + * | + * [Host 2] + */ + host = alloc_host(test); + nhi = &host->ports[7]; + port = &host->ports[1]; + + tunnel = tb_tunnel_alloc_dma(NULL, nhi, port, 8, 1, 8, 1); + KUNIT_ASSERT_TRUE(test, tunnel != NULL); + KUNIT_EXPECT_EQ(test, tunnel->type, (enum tb_tunnel_type)TB_TUNNEL_DMA); + KUNIT_EXPECT_PTR_EQ(test, tunnel->src_port, nhi); + KUNIT_EXPECT_PTR_EQ(test, tunnel->dst_port, port); + KUNIT_ASSERT_EQ(test, tunnel->npaths, (size_t)2); + /* RX path */ + KUNIT_ASSERT_EQ(test, tunnel->paths[0]->path_length, 1); + KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[0].in_port, port); + KUNIT_EXPECT_EQ(test, tunnel->paths[0]->hops[0].in_hop_index, 8); + KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[0].out_port, nhi); + KUNIT_EXPECT_EQ(test, tunnel->paths[0]->hops[0].next_hop_index, 1); + /* TX path */ + KUNIT_ASSERT_EQ(test, tunnel->paths[1]->path_length, 1); + KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[1]->hops[0].in_port, nhi); + KUNIT_EXPECT_EQ(test, tunnel->paths[1]->hops[0].in_hop_index, 1); + KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[1]->hops[0].out_port, port); + KUNIT_EXPECT_EQ(test, tunnel->paths[1]->hops[0].next_hop_index, 8); + + tb_tunnel_free(tunnel); +} + +static void tb_test_tunnel_dma_rx(struct kunit *test) +{ + struct tb_port *nhi, *port; + struct tb_tunnel *tunnel; + struct tb_switch *host; + + /* + * Create DMA RX tunnel from port 1 to NHI. + * + * [Host 1] + * 1 ^ + * | + * | In HopID 15 -> Out HopID 2 + * ............ 
Domain border + * | + * [Host 2] + */ + host = alloc_host(test); + nhi = &host->ports[7]; + port = &host->ports[1]; + + tunnel = tb_tunnel_alloc_dma(NULL, nhi, port, -1, -1, 15, 2); + KUNIT_ASSERT_TRUE(test, tunnel != NULL); + KUNIT_EXPECT_EQ(test, tunnel->type, (enum tb_tunnel_type)TB_TUNNEL_DMA); + KUNIT_EXPECT_PTR_EQ(test, tunnel->src_port, nhi); + KUNIT_EXPECT_PTR_EQ(test, tunnel->dst_port, port); + KUNIT_ASSERT_EQ(test, tunnel->npaths, (size_t)1); + /* RX path */ + KUNIT_ASSERT_EQ(test, tunnel->paths[0]->path_length, 1); + KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[0].in_port, port); + KUNIT_EXPECT_EQ(test, tunnel->paths[0]->hops[0].in_hop_index, 15); + KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[0].out_port, nhi); + KUNIT_EXPECT_EQ(test, tunnel->paths[0]->hops[0].next_hop_index, 2); + + tb_tunnel_free(tunnel); +} + +static void tb_test_tunnel_dma_tx(struct kunit *test) +{ + struct tb_port *nhi, *port; + struct tb_tunnel *tunnel; + struct tb_switch *host; + + /* + * Create DMA TX tunnel from NHI to port 1. + * + * [Host 1] + * 1 | In HopID 2 -> Out HopID 15 + * | + * v + * ............ Domain border + * | + * [Host 2] + */ + host = alloc_host(test); + nhi = &host->ports[7]; + port = &host->ports[1]; + + tunnel = tb_tunnel_alloc_dma(NULL, nhi, port, 15, 2, -1, -1); + KUNIT_ASSERT_TRUE(test, tunnel != NULL); + KUNIT_EXPECT_EQ(test, tunnel->type, (enum tb_tunnel_type)TB_TUNNEL_DMA); + KUNIT_EXPECT_PTR_EQ(test, tunnel->src_port, nhi); + KUNIT_EXPECT_PTR_EQ(test, tunnel->dst_port, port); + KUNIT_ASSERT_EQ(test, tunnel->npaths, (size_t)1); + /* TX path */ + KUNIT_ASSERT_EQ(test, tunnel->paths[0]->path_length, 1); + KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[0].in_port, nhi); + KUNIT_EXPECT_EQ(test, tunnel->paths[0]->hops[0].in_hop_index, 2); + KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[0].out_port, port); + KUNIT_EXPECT_EQ(test, tunnel->paths[0]->hops[0].next_hop_index, 15); + + tb_tunnel_free(tunnel); +} + +static void tb_test_tunnel_dma_chain(struct kunit *test) +{ + struct tb_switch *host, *dev1, *dev2; + struct tb_port *nhi, *port; + struct tb_tunnel *tunnel; + + /* + * Create DMA tunnel from NHI to Device #2 port 3 and back. + * + * [Host 1] + * 1 ^ In HopID 1 -> Out HopID x + * | + * 1 | In HopID x -> Out HopID 1 + * [Device #1] + * 7 \ + * 1 \ + * [Device #2] + * 3 | In HopID x -> Out HopID 8 + * | + * v In HopID 8 -> Out HopID x + * ............ 
Domain border + * | + * [Host 2] + */ + host = alloc_host(test); + dev1 = alloc_dev_default(test, host, 0x1, true); + dev2 = alloc_dev_default(test, dev1, 0x701, true); + + nhi = &host->ports[7]; + port = &dev2->ports[3]; + tunnel = tb_tunnel_alloc_dma(NULL, nhi, port, 8, 1, 8, 1); + KUNIT_ASSERT_TRUE(test, tunnel != NULL); + KUNIT_EXPECT_EQ(test, tunnel->type, (enum tb_tunnel_type)TB_TUNNEL_DMA); + KUNIT_EXPECT_PTR_EQ(test, tunnel->src_port, nhi); + KUNIT_EXPECT_PTR_EQ(test, tunnel->dst_port, port); + KUNIT_ASSERT_EQ(test, tunnel->npaths, (size_t)2); + /* RX path */ + KUNIT_ASSERT_EQ(test, tunnel->paths[0]->path_length, 3); + KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[0].in_port, port); + KUNIT_EXPECT_EQ(test, tunnel->paths[0]->hops[0].in_hop_index, 8); + KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[0].out_port, + &dev2->ports[1]); + KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[1].in_port, + &dev1->ports[7]); + KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[1].out_port, + &dev1->ports[1]); + KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[2].in_port, + &host->ports[1]); + KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[2].out_port, nhi); + KUNIT_EXPECT_EQ(test, tunnel->paths[0]->hops[2].next_hop_index, 1); + /* TX path */ + KUNIT_ASSERT_EQ(test, tunnel->paths[1]->path_length, 3); + KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[1]->hops[0].in_port, nhi); + KUNIT_EXPECT_EQ(test, tunnel->paths[1]->hops[0].in_hop_index, 1); + KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[1]->hops[1].in_port, + &dev1->ports[1]); + KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[1]->hops[1].out_port, + &dev1->ports[7]); + KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[1]->hops[2].in_port, + &dev2->ports[1]); + KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[1]->hops[2].out_port, port); + KUNIT_EXPECT_EQ(test, tunnel->paths[1]->hops[2].next_hop_index, 8); + + tb_tunnel_free(tunnel); +} + +static void tb_test_tunnel_dma_match(struct kunit *test) +{ + struct tb_port *nhi, *port; + struct tb_tunnel *tunnel; + struct tb_switch *host; + + host = alloc_host(test); + nhi = &host->ports[7]; + port = &host->ports[1]; + + tunnel = tb_tunnel_alloc_dma(NULL, nhi, port, 15, 1, 15, 1); + KUNIT_ASSERT_TRUE(test, tunnel != NULL); + + KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, 15, 1, 15, 1)); + KUNIT_ASSERT_FALSE(test, tb_tunnel_match_dma(tunnel, 8, 1, 15, 1)); + KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, -1, -1, 15, 1)); + KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, 15, 1, -1, -1)); + KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, 15, -1, -1, -1)); + KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, -1, 1, -1, -1)); + KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, -1, -1, 15, -1)); + KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, -1, -1, -1, 1)); + KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, -1, -1, -1, -1)); + KUNIT_ASSERT_FALSE(test, tb_tunnel_match_dma(tunnel, 8, -1, 8, -1)); + + tb_tunnel_free(tunnel); + + tunnel = tb_tunnel_alloc_dma(NULL, nhi, port, 15, 1, -1, -1); + KUNIT_ASSERT_TRUE(test, tunnel != NULL); + KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, 15, 1, -1, -1)); + KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, 15, -1, -1, -1)); + KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, -1, 1, -1, -1)); + KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, -1, -1, -1, -1)); + KUNIT_ASSERT_FALSE(test, tb_tunnel_match_dma(tunnel, 15, 1, 15, 1)); + KUNIT_ASSERT_FALSE(test, tb_tunnel_match_dma(tunnel, -1, -1, 15, 1)); + KUNIT_ASSERT_FALSE(test, tb_tunnel_match_dma(tunnel, 15, 
11, -1, -1)); + + tb_tunnel_free(tunnel); + + tunnel = tb_tunnel_alloc_dma(NULL, nhi, port, -1, -1, 15, 11); + KUNIT_ASSERT_TRUE(test, tunnel != NULL); + KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, -1, -1, 15, 11)); + KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, -1, -1, 15, -1)); + KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, -1, -1, -1, 11)); + KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, -1, -1, -1, -1)); + KUNIT_ASSERT_FALSE(test, tb_tunnel_match_dma(tunnel, -1, -1, 15, 1)); + KUNIT_ASSERT_FALSE(test, tb_tunnel_match_dma(tunnel, -1, -1, 10, 11)); + KUNIT_ASSERT_FALSE(test, tb_tunnel_match_dma(tunnel, 15, 11, -1, -1)); + + tb_tunnel_free(tunnel); +} + static const u32 root_directory[] = { 0x55584401, /* "UXD" v1 */ 0x00000018, /* Root directory length */ @@ -1865,6 +2100,11 @@ static struct kunit_case tb_test_cases[] = { KUNIT_CASE(tb_test_tunnel_dp_max_length), KUNIT_CASE(tb_test_tunnel_port_on_path), KUNIT_CASE(tb_test_tunnel_usb3), + KUNIT_CASE(tb_test_tunnel_dma), + KUNIT_CASE(tb_test_tunnel_dma_rx), + KUNIT_CASE(tb_test_tunnel_dma_tx), + KUNIT_CASE(tb_test_tunnel_dma_chain), + KUNIT_CASE(tb_test_tunnel_dma_match), KUNIT_CASE(tb_test_property_parse), KUNIT_CASE(tb_test_property_format), KUNIT_CASE(tb_test_property_copy), From patchwork Thu Mar 4 12:31:24 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mika Westerberg X-Patchwork-Id: 393299 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 76C98C433DB for ; Thu, 4 Mar 2021 12:35:29 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 424B064F2C for ; Thu, 4 Mar 2021 12:35:29 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S241026AbhCDMe6 (ORCPT ); Thu, 4 Mar 2021 07:34:58 -0500 Received: from mga01.intel.com ([192.55.52.88]:37596 "EHLO mga01.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S240964AbhCDMef (ORCPT ); Thu, 4 Mar 2021 07:34:35 -0500 IronPort-SDR: LI6dklMOIzTACjdv/k0/OfC2un8CMOcs2aCW9hM9yN5I0UJxXq4jMe9le+kWPjrkBEqqkkBc3w fovT2VzKm2ug== X-IronPort-AV: E=McAfee;i="6000,8403,9912"; a="207113164" X-IronPort-AV: E=Sophos;i="5.81,222,1610438400"; d="scan'208";a="207113164" Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Mar 2021 04:31:34 -0800 IronPort-SDR: 93TnYZct3/Ao/8+aX6fbc7HzzS1395Lkkl+Rr1HtyarOnlss+CNFw1sM/LA4ij+6FIU9W8YZJM nBL6hOpL4rpg== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.81,222,1610438400"; d="scan'208";a="374534704" Received: from black.fi.intel.com ([10.237.72.28]) by fmsmga007.fm.intel.com with ESMTP; 04 Mar 2021 04:31:32 -0800 Received: by black.fi.intel.com (Postfix, from userid 1001) id DF4B4670; Thu, 4 Mar 2021 14:31:26 +0200 (EET) From: Mika Westerberg To: linux-usb@vger.kernel.org Cc: Michael Jamet , Yehezkel Bernat , Andreas Noever , Isaac Hazan , Mika Westerberg , Lukas Wunner , "David S . 
Miller" , Jakub Kicinski , netdev@vger.kernel.org Subject: [PATCH 17/18] thunderbolt: Check quirks in tb_switch_add() Date: Thu, 4 Mar 2021 15:31:24 +0300 Message-Id: <20210304123125.43630-18-mika.westerberg@linux.intel.com> X-Mailer: git-send-email 2.30.1 In-Reply-To: <20210304123125.43630-1-mika.westerberg@linux.intel.com> References: <20210304123125.43630-1-mika.westerberg@linux.intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-usb@vger.kernel.org This makes it more visible on the main path of adding router. Signed-off-by: Mika Westerberg --- drivers/thunderbolt/eeprom.c | 1 - drivers/thunderbolt/switch.c | 2 ++ 2 files changed, 2 insertions(+), 1 deletion(-) diff --git a/drivers/thunderbolt/eeprom.c b/drivers/thunderbolt/eeprom.c index dd03d3096653..aecb0b9f0c75 100644 --- a/drivers/thunderbolt/eeprom.c +++ b/drivers/thunderbolt/eeprom.c @@ -610,7 +610,6 @@ int tb_drom_read(struct tb_switch *sw) sw->uid = header->uid; sw->vendor = header->vendor_id; sw->device = header->model_id; - tb_check_quirks(sw); crc = tb_crc32(sw->drom + TB_DROM_DATA_START, header->data_len); if (crc != header->data_crc32) { diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c index 7ac37a1f95e1..72b43c7c0651 100644 --- a/drivers/thunderbolt/switch.c +++ b/drivers/thunderbolt/switch.c @@ -2520,6 +2520,8 @@ int tb_switch_add(struct tb_switch *sw) } tb_sw_dbg(sw, "uid: %#llx\n", sw->uid); + tb_check_quirks(sw); + ret = tb_switch_set_uuid(sw); if (ret) { dev_err(&sw->dev, "failed to set UUID\n"); From patchwork Thu Mar 4 12:31:25 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mika Westerberg X-Patchwork-Id: 394135 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B7679C4332B for ; Thu, 4 Mar 2021 12:35:29 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 9210564F11 for ; Thu, 4 Mar 2021 12:35:29 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S240967AbhCDMe6 (ORCPT ); Thu, 4 Mar 2021 07:34:58 -0500 Received: from mga05.intel.com ([192.55.52.43]:28648 "EHLO mga05.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S240994AbhCDMeh (ORCPT ); Thu, 4 Mar 2021 07:34:37 -0500 IronPort-SDR: m95GBbiwC0W0gHdcBPoWxGphA/IvLfb8Vds3UV5EEbNyLp6wCAToRuxRdvc50NmE6qGRryBjtx SdsNX+5cbkyw== X-IronPort-AV: E=McAfee;i="6000,8403,9912"; a="272407045" X-IronPort-AV: E=Sophos;i="5.81,222,1610438400"; d="scan'208";a="272407045" Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Mar 2021 04:31:35 -0800 IronPort-SDR: xCTp1s/OrcjpiNBQkt4NjSf5kzeKZ40lCf2WTpjO9Oix4ZjW88x69RMC43EUkofohO1j4DkNQ5 mNvPCyOAyI1Q== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.81,222,1610438400"; d="scan'208";a="600508115" Received: from black.fi.intel.com ([10.237.72.28]) by fmsmga005.fm.intel.com with ESMTP; 04 Mar 2021 04:31:32 -0800 Received: by black.fi.intel.com (Postfix, from userid 1001) id EA9B676E; Thu, 4 Mar 
2021 14:31:26 +0200 (EET) From: Mika Westerberg To: linux-usb@vger.kernel.org Cc: Michael Jamet , Yehezkel Bernat , Andreas Noever , Isaac Hazan , Mika Westerberg , Lukas Wunner , "David S . Miller" , Jakub Kicinski , netdev@vger.kernel.org Subject: [PATCH 18/18] thunderbolt: Add support for USB4 DROM Date: Thu, 4 Mar 2021 15:31:25 +0300 Message-Id: <20210304123125.43630-19-mika.westerberg@linux.intel.com> X-Mailer: git-send-email 2.30.1 In-Reply-To: <20210304123125.43630-1-mika.westerberg@linux.intel.com> References: <20210304123125.43630-1-mika.westerberg@linux.intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-usb@vger.kernel.org USB4 router DROM differs slightly from Thunderbolt 1-3 DROM. For instance, it does not include UID and CRC8 in the header section, and it has a generic product descriptor entry to describe the product IDs and related information. If the "Version" field in the DROM header section reads 3, the router only has a USB4 DROM; if it reads 1, the router supports a TBT3-compatible DROM. For this reason, update the DROM parsing code to support "pure" USB4 DROMs too. While there, drop the extra empty line at the end of tb_drom_read(). Signed-off-by: Mika Westerberg --- drivers/thunderbolt/eeprom.c | 104 +++++++++++++++++++++++++++-------- 1 file changed, 80 insertions(+), 24 deletions(-) diff --git a/drivers/thunderbolt/eeprom.c b/drivers/thunderbolt/eeprom.c index aecb0b9f0c75..46d0906a3070 100644 --- a/drivers/thunderbolt/eeprom.c +++ b/drivers/thunderbolt/eeprom.c @@ -277,6 +277,16 @@ struct tb_drom_entry_port { u8 unknown4:2; } __packed; +/* USB4 product descriptor */ +struct tb_drom_entry_desc { + struct tb_drom_entry_header header; + u16 bcdUSBSpec; + u16 idVendor; + u16 idProduct; + u16 bcdProductFWRevision; + u32 TID; + u8 productHWRevision; +}; /** * tb_drom_read_uid_only() - Read UID directly from DROM @@ -329,6 +339,16 @@ static int tb_drom_parse_entry_generic(struct tb_switch *sw, if (!sw->device_name) return -ENOMEM; break; + case 9: { + const struct tb_drom_entry_desc *desc = + (const struct tb_drom_entry_desc *)entry; + + if (!sw->vendor && !sw->device) { + sw->vendor = desc->idVendor; + sw->device = desc->idProduct; + } + break; + } } return 0; @@ -521,6 +541,51 @@ static int tb_drom_read_n(struct tb_switch *sw, u16 offset, u8 *val, return tb_eeprom_read_n(sw, offset, val, count); } +static int tb_drom_parse(struct tb_switch *sw) +{ + const struct tb_drom_header *header = + (const struct tb_drom_header *)sw->drom; + u32 crc; + + crc = tb_crc8((u8 *) &header->uid, 8); + if (crc != header->uid_crc8) { + tb_sw_warn(sw, + "DROM UID CRC8 mismatch (expected: %#x, got: %#x), aborting\n", + header->uid_crc8, crc); + return -EINVAL; + } + if (!sw->uid) + sw->uid = header->uid; + sw->vendor = header->vendor_id; + sw->device = header->model_id; + + crc = tb_crc32(sw->drom + TB_DROM_DATA_START, header->data_len); + if (crc != header->data_crc32) { + tb_sw_warn(sw, + "DROM data CRC32 mismatch (expected: %#x, got: %#x), continuing\n", + header->data_crc32, crc); + } + + return tb_drom_parse_entries(sw); +} + +static int usb4_drom_parse(struct tb_switch *sw) +{ + const struct tb_drom_header *header = + (const struct tb_drom_header *)sw->drom; + u32 crc; + + crc = tb_crc32(sw->drom + TB_DROM_DATA_START, header->data_len); + if (crc != header->data_crc32) { + tb_sw_warn(sw, + "DROM data CRC32 mismatch (expected: %#x, got: %#x), aborting\n", + header->data_crc32, crc); + return -EINVAL; + } + + return tb_drom_parse_entries(sw); +} + 
/** * tb_drom_read() - Copy DROM to sw->drom and parse it * @sw: Router whose DROM to read and parse @@ -534,7 +599,6 @@ static int tb_drom_read_n(struct tb_switch *sw, u16 offset, u8 *val, int tb_drom_read(struct tb_switch *sw) { u16 size; - u32 crc; struct tb_drom_header *header; int res, retries = 1; @@ -599,30 +663,21 @@ int tb_drom_read(struct tb_switch *sw) goto err; } - crc = tb_crc8((u8 *) &header->uid, 8); - if (crc != header->uid_crc8) { - tb_sw_warn(sw, - "drom uid crc8 mismatch (expected: %#x, got: %#x), aborting\n", - header->uid_crc8, crc); - goto err; - } - if (!sw->uid) - sw->uid = header->uid; - sw->vendor = header->vendor_id; - sw->device = header->model_id; + tb_sw_dbg(sw, "DROM version: %d\n", header->device_rom_revision); - crc = tb_crc32(sw->drom + TB_DROM_DATA_START, header->data_len); - if (crc != header->data_crc32) { - tb_sw_warn(sw, - "drom data crc32 mismatch (expected: %#x, got: %#x), continuing\n", - header->data_crc32, crc); + switch (header->device_rom_revision) { + case 3: + res = usb4_drom_parse(sw); + break; + default: + tb_sw_warn(sw, "DROM device_rom_revision %#x unknown\n", + header->device_rom_revision); + fallthrough; + case 1: + res = tb_drom_parse(sw); + break; } - if (header->device_rom_revision > 2) - tb_sw_warn(sw, "drom device_rom_revision %#x unknown\n", - header->device_rom_revision); - - res = tb_drom_parse_entries(sw); /* If the DROM parsing fails, wait a moment and retry once */ if (res == -EILSEQ && retries--) { tb_sw_warn(sw, "parsing DROM failed, retrying\n"); @@ -632,10 +687,11 @@ int tb_drom_read(struct tb_switch *sw) goto parse; } - return res; + if (!res) + return 0; + err: kfree(sw->drom); sw->drom = NULL; return -EIO; - }
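For readers tracing the dispatch above: the switch statement deliberately orders its cases so that an unknown "Version" value warns and then falls through to the TBT3-compatible parser, while version 3 takes the USB4-only path that skips the UID/CRC8 check. A condensed stand-alone sketch of that control flow follows; parse_tbt3() and parse_usb4() are hypothetical stand-ins for tb_drom_parse() and usb4_drom_parse(), not kernel functions.

#include <stdio.h>

static int parse_tbt3(void) { return 0; } /* stand-in for tb_drom_parse() */
static int parse_usb4(void) { return 0; } /* stand-in for usb4_drom_parse() */

static int parse_drom(unsigned int device_rom_revision)
{
	switch (device_rom_revision) {
	case 3:
		/* USB4-only DROM: no UID/CRC8 in the header section */
		return parse_usb4();
	default:
		/* Unknown revision: warn, then try the TBT3 parser anyway */
		fprintf(stderr, "DROM device_rom_revision %#x unknown\n",
			device_rom_revision);
		/* fall through */
	case 1:
		/* TBT3-compatible DROM */
		return parse_tbt3();
	}
}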