From patchwork Mon May 29 10:04:07 2023
X-Patchwork-Submitter: Mika Westerberg
X-Patchwork-Id: 686844
From: Mika Westerberg
To: linux-usb@vger.kernel.org
Cc: Yehezkel Bernat, Michael Jamet, Lukas Wunner, Andreas Noever, Gil Fine, Mika Westerberg
Subject: [PATCH 02/20] thunderbolt: Introduce tb_xdomain_downstream_port()
Date: Mon, 29 May 2023 13:04:07 +0300
Message-Id: <20230529100425.6125-3-mika.westerberg@linux.intel.com>
In-Reply-To: <20230529100425.6125-1-mika.westerberg@linux.intel.com>
References: <20230529100425.6125-1-mika.westerberg@linux.intel.com>
X-Mailing-List: linux-usb@vger.kernel.org

In the same way we did for the routers, add a function that returns the
parent router's downstream-facing port for XDomain devices.
Signed-off-by: Mika Westerberg
---
 drivers/thunderbolt/tb.h      | 11 +++++++++++
 drivers/thunderbolt/xdomain.c | 16 +++++++---------
 2 files changed, 18 insertions(+), 9 deletions(-)

diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index beaeea679e10..797d8bb73bfa 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -1197,6 +1197,17 @@ static inline struct tb_switch *tb_xdomain_parent(struct tb_xdomain *xd)
 	return tb_to_switch(xd->dev.parent);
 }
 
+/**
+ * tb_xdomain_downstream_port() - Return downstream facing port of parent router
+ * @xd: Xdomain pointer
+ *
+ * Returns the downstream port the XDomain is connected to.
+ */
+static inline struct tb_port *tb_xdomain_downstream_port(struct tb_xdomain *xd)
+{
+	return tb_port_at(xd->route, tb_xdomain_parent(xd));
+}
+
 int tb_retimer_nvm_read(struct tb_retimer *rt, unsigned int address, void *buf,
 			size_t size);
 int tb_retimer_scan(struct tb_port *port, bool add);
diff --git a/drivers/thunderbolt/xdomain.c b/drivers/thunderbolt/xdomain.c
index e2b54887d331..8389961b2d45 100644
--- a/drivers/thunderbolt/xdomain.c
+++ b/drivers/thunderbolt/xdomain.c
@@ -537,9 +537,8 @@ static int tb_xdp_link_state_status_request(struct tb_ctl *ctl, u64 route,
 static int tb_xdp_link_state_status_response(struct tb *tb, struct tb_ctl *ctl,
 					     struct tb_xdomain *xd, u8 sequence)
 {
-	struct tb_switch *sw = tb_to_switch(xd->dev.parent);
 	struct tb_xdp_link_state_status_response res;
-	struct tb_port *port = tb_port_at(xd->route, sw);
+	struct tb_port *port = tb_xdomain_downstream_port(xd);
 	u32 val[2];
 	int ret;
 
@@ -1137,7 +1136,7 @@ static int tb_xdomain_update_link_attributes(struct tb_xdomain *xd)
 	struct tb_port *port;
 	int ret;
 
-	port = tb_port_at(xd->route, tb_xdomain_parent(xd));
+	port = tb_xdomain_downstream_port(xd);
 
 	ret = tb_port_get_link_speed(port);
 	if (ret < 0)
@@ -1251,8 +1250,7 @@ static int tb_xdomain_get_link_status(struct tb_xdomain *xd)
 static int tb_xdomain_link_state_change(struct tb_xdomain *xd,
 					unsigned int width)
 {
-	struct tb_switch *sw = tb_to_switch(xd->dev.parent);
-	struct tb_port *port = tb_port_at(xd->route, sw);
+	struct tb_port *port = tb_xdomain_downstream_port(xd);
 	struct tb *tb = xd->tb;
 	u8 tlw, tls;
 	u32 val;
@@ -1309,7 +1307,7 @@ static int tb_xdomain_bond_lanes_uuid_high(struct tb_xdomain *xd)
 		return -ETIMEDOUT;
 	}
 
-	port = tb_port_at(xd->route, tb_xdomain_parent(xd));
+	port = tb_xdomain_downstream_port(xd);
 
 	/*
 	 * We can't use tb_xdomain_lane_bonding_enable() here because it
@@ -1425,7 +1423,7 @@ static int tb_xdomain_get_properties(struct tb_xdomain *xd)
 	if (xd->bonding_possible) {
 		struct tb_port *port;
 
-		port = tb_port_at(xd->route, tb_xdomain_parent(xd));
+		port = tb_xdomain_downstream_port(xd);
 		if (!port->bonded)
 			tb_port_disable(port->dual_link_port);
 	}
@@ -1979,7 +1977,7 @@ int tb_xdomain_lane_bonding_enable(struct tb_xdomain *xd)
 	struct tb_port *port;
 	int ret;
 
-	port = tb_port_at(xd->route, tb_xdomain_parent(xd));
+	port = tb_xdomain_downstream_port(xd);
 	if (!port->dual_link_port)
 		return -ENODEV;
 
@@ -2024,7 +2022,7 @@ void tb_xdomain_lane_bonding_disable(struct tb_xdomain *xd)
 {
 	struct tb_port *port;
 
-	port = tb_port_at(xd->route, tb_xdomain_parent(xd));
+	port = tb_xdomain_downstream_port(xd);
 	if (port->dual_link_port) {
 		tb_port_lane_bonding_disable(port);
 		if (tb_port_wait_for_link_width(port, 1, 100) == -ETIMEDOUT)
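As background for the new helper: the route string of an XDomain encodes,
one byte per hop, the downstream port number used at each depth, and that
is what tb_port_at() resolves. Below is a small standalone sketch of the
lookup (toy types and a simplified encoding for illustration only, not the
driver's real structures):

/* toy_route.c - standalone sketch, not driver code. Models how a
 * 64-bit route string indexes the parent router's downstream port,
 * the lookup that tb_xdomain_downstream_port() now wraps.
 */
#include <stdint.h>
#include <stdio.h>

struct toy_port {
	int number;
};

struct toy_switch {
	int depth;			/* 0 == host router */
	struct toy_port ports[8];	/* adapters of this router */
};

/* Rough equivalent of tb_port_at(): pick the byte of the route that
 * belongs to this router's depth and use it as a port index. */
static struct toy_port *toy_port_at(uint64_t route, struct toy_switch *sw)
{
	unsigned int port = (route >> (sw->depth * 8)) & 0xff;

	return &sw->ports[port];
}

int main(void)
{
	struct toy_switch parent = { .depth = 0 };
	uint64_t route = 0x3;	/* XDomain hangs off port 3 of the parent */

	for (int i = 0; i < 8; i++)
		parent.ports[i].number = i;

	printf("downstream facing port: %d\n",
	       toy_port_at(route, &parent)->number);
	return 0;
}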
From patchwork Mon May 29 10:04:10 2023
X-Patchwork-Submitter: Mika Westerberg
X-Patchwork-Id: 686842
From: Mika Westerberg
To: linux-usb@vger.kernel.org
Cc: Yehezkel Bernat, Michael Jamet, Lukas Wunner, Andreas Noever, Gil Fine, Mika Westerberg
Subject: [PATCH 05/20] thunderbolt: Rework Titan Ridge TMU objection disable function
Date: Mon, 29 May 2023 13:04:10 +0300
Message-Id: <20230529100425.6125-6-mika.westerberg@linux.intel.com>
In-Reply-To: <20230529100425.6125-1-mika.westerberg@linux.intel.com>
References: <20230529100425.6125-1-mika.westerberg@linux.intel.com>
X-Mailing-List: linux-usb@vger.kernel.org

The code that disables TMU objections on Titan Ridge is currently split
into two functions, one of which has a misleading name
(tb_switch_tmu_unidirectional_enable()). Make this easier to read:
rename and consolidate the two functions into one whose name explains
what it actually does. Also use the two constants that were added but
never used, to make it clear which bits are being set. No functional
changes.
Signed-off-by: Mika Westerberg
---
 drivers/thunderbolt/tmu.c | 24 ++++++++++--------------
 1 file changed, 10 insertions(+), 14 deletions(-)

diff --git a/drivers/thunderbolt/tmu.c b/drivers/thunderbolt/tmu.c
index 5d508ea8baa5..30f18806abb7 100644
--- a/drivers/thunderbolt/tmu.c
+++ b/drivers/thunderbolt/tmu.c
@@ -503,8 +503,10 @@ static int tb_switch_tmu_enable_bidirectional(struct tb_switch *sw)
 	return ret;
 }
 
-static int tb_switch_tmu_objection_mask(struct tb_switch *sw)
+/* Only needed for Titan Ridge */
+static int tb_switch_tmu_disable_objections(struct tb_switch *sw)
 {
+	struct tb_port *up = tb_upstream_port(sw);
 	u32 val;
 	int ret;
 
@@ -515,17 +517,15 @@ static int tb_switch_tmu_objection_mask(struct tb_switch *sw)
 	if (ret)
 		return ret;
 
 	val &= ~TB_TIME_VSEC_3_CS_9_TMU_OBJ_MASK;
 
-	return tb_sw_write(sw, &val, TB_CFG_SWITCH,
-			   sw->cap_vsec_tmu + TB_TIME_VSEC_3_CS_9, 1);
-}
-
-static int tb_switch_tmu_unidirectional_enable(struct tb_switch *sw)
-{
-	struct tb_port *up = tb_upstream_port(sw);
+	ret = tb_sw_write(sw, &val, TB_CFG_SWITCH,
+			  sw->cap_vsec_tmu + TB_TIME_VSEC_3_CS_9, 1);
+	if (ret)
+		return ret;
 
 	return tb_port_tmu_write(up, TMU_ADP_CS_6,
 				 TMU_ADP_CS_6_DISABLE_TMU_OBJ_MASK,
-				 TMU_ADP_CS_6_DISABLE_TMU_OBJ_MASK);
+				 TMU_ADP_CS_6_DISABLE_TMU_OBJ_CL1 |
+				 TMU_ADP_CS_6_DISABLE_TMU_OBJ_CL2);
 }
 
 /*
@@ -670,11 +670,7 @@ int tb_switch_tmu_enable(struct tb_switch *sw)
 		if (!tb_switch_is_clx_enabled(sw, TB_CL1))
 			return -EOPNOTSUPP;
 
-		ret = tb_switch_tmu_objection_mask(sw);
-		if (ret)
-			return ret;
-
-		ret = tb_switch_tmu_unidirectional_enable(sw);
+		ret = tb_switch_tmu_disable_objections(sw);
 		if (ret)
 			return ret;
 	}
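To illustrate the mask/value write the consolidated function ends with:
tb_port_tmu_write() only changes the bits selected by the mask, which is
why spelling out the individual CL1/CL2 objection-disable constants makes
the intent clearer. A standalone sketch with made-up bit values:

/* toy_masked_write.c - standalone sketch, not driver code. Only bits
 * selected by the mask change; all other register bits are preserved.
 * The bit values here are invented; the real ones are the
 * TMU_ADP_CS_6_DISABLE_TMU_OBJ_* constants in the diff.
 */
#include <stdint.h>
#include <stdio.h>

#define OBJ_CL1		(1u << 0)	/* hypothetical objection bits */
#define OBJ_CL2		(1u << 1)
#define OBJ_MASK	(OBJ_CL1 | OBJ_CL2)

static uint32_t masked_write(uint32_t reg, uint32_t mask, uint32_t value)
{
	reg &= ~mask;		/* clear the whole field first */
	reg |= value & mask;	/* set only bits covered by the mask */
	return reg;
}

int main(void)
{
	uint32_t reg = 0xabcd0000u;	/* unrelated bits already set */

	reg = masked_write(reg, OBJ_MASK, OBJ_CL1 | OBJ_CL2);
	/* prints 0xabcd0003: both disable bits set, the rest untouched */
	printf("reg = 0x%08x\n", (unsigned int)reg);
	return 0;
}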
From patchwork Mon May 29 10:04:11 2023
X-Patchwork-Submitter: Mika Westerberg
X-Patchwork-Id: 686843
From: Mika Westerberg
To: linux-usb@vger.kernel.org
Cc: Yehezkel Bernat, Michael Jamet, Lukas Wunner, Andreas Noever, Gil Fine, Mika Westerberg
Subject: [PATCH 06/20] thunderbolt: Get rid of tb_switch_enable_tmu_1st_child()
Date: Mon, 29 May 2023 13:04:11 +0300
Message-Id: <20230529100425.6125-7-mika.westerberg@linux.intel.com>
In-Reply-To: <20230529100425.6125-1-mika.westerberg@linux.intel.com>
References: <20230529100425.6125-1-mika.westerberg@linux.intel.com>
X-Mailing-List: linux-usb@vger.kernel.org

This is better done as part of the software connection manager flows in
tb.c. Also name the new function tb_increase_tmu_accuracy() to match
what it actually does.

Signed-off-by: Mika Westerberg
---
 drivers/thunderbolt/tb.c  | 43 +++++++++++++++++++++++++++++++--------
 drivers/thunderbolt/tb.h  |  2 --
 drivers/thunderbolt/tmu.c | 29 --------------------------
 3 files changed, 34 insertions(+), 40 deletions(-)

diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
index 72041e29e544..39ec7094fe17 100644
--- a/drivers/thunderbolt/tb.c
+++ b/drivers/thunderbolt/tb.c
@@ -240,6 +240,38 @@ static void tb_discover_dp_resources(struct tb *tb)
 	}
 }
 
+static int tb_increase_switch_tmu_accuracy(struct device *dev, void *data)
+{
+	struct tb_switch *sw;
+
+	sw = tb_to_switch(dev);
+	if (sw) {
+		tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_HIFI,
+					tb_switch_is_clx_enabled(sw, TB_CL1));
+		if (tb_switch_tmu_enable(sw))
+			tb_sw_warn(sw, "failed to increase TMU rate\n");
+	}
+
+	return 0;
+}
+
+static void tb_increase_tmu_accuracy(struct tb_tunnel *tunnel)
+{
+	struct tb_switch *sw;
+
+	if (!tunnel)
+		return;
+
+	/*
+	 * Once first DP tunnel is established we change the TMU
+	 * accuracy of first depth child routers (and the host router)
+	 * to the highest. This is needed for the DP tunneling to work
+	 * but also allows CL0s.
+	 */
+	sw = tunnel->tb->root_switch;
+	device_for_each_child(&sw->dev, NULL, tb_increase_switch_tmu_accuracy);
+}
+
 static void tb_switch_discover_tunnels(struct tb_switch *sw,
 				       struct list_head *list,
 				       bool alloc_hopids)
@@ -253,13 +285,7 @@ static void tb_switch_discover_tunnels(struct tb_switch *sw,
 		switch (port->config.type) {
 		case TB_TYPE_DP_HDMI_IN:
 			tunnel = tb_tunnel_discover_dp(tb, port, alloc_hopids);
-			/*
-			 * In case of DP tunnel exists, change host router's
-			 * 1st children TMU mode to HiFi for CL0s to work.
-			 */
-			if (tunnel)
-				tb_switch_enable_tmu_1st_child(tb->root_switch,
-					TB_SWITCH_TMU_RATE_HIFI);
+			tb_increase_tmu_accuracy(tunnel);
 			break;
 
 		case TB_TYPE_PCIE_DOWN:
@@ -1263,8 +1289,7 @@ static void tb_tunnel_dp(struct tb *tb)
 	 * In case of DP tunnel exists, change host router's 1st children
 	 * TMU mode to HiFi for CL0s to work.
 	 */
-	tb_switch_enable_tmu_1st_child(tb->root_switch, TB_SWITCH_TMU_RATE_HIFI);
-
+	tb_increase_tmu_accuracy(tunnel);
 	return;
 
 err_free:
diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index 0ac653bfd97e..8cc64b79f35c 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -990,8 +990,6 @@ int tb_switch_tmu_enable(struct tb_switch *sw);
 void tb_switch_tmu_configure(struct tb_switch *sw,
 			     enum tb_switch_tmu_rate rate,
 			     bool unidirectional);
-void tb_switch_enable_tmu_1st_child(struct tb_switch *sw,
-				    enum tb_switch_tmu_rate rate);
 /**
  * tb_switch_tmu_is_enabled() - Checks if the specified TMU mode is enabled
  * @sw: Router whose TMU mode to check
diff --git a/drivers/thunderbolt/tmu.c b/drivers/thunderbolt/tmu.c
index 30f18806abb7..84abb783a6d9 100644
--- a/drivers/thunderbolt/tmu.c
+++ b/drivers/thunderbolt/tmu.c
@@ -731,32 +731,3 @@ void tb_switch_tmu_configure(struct tb_switch *sw,
 	sw->tmu.unidirectional_request = unidirectional;
 	sw->tmu.rate_request = rate;
 }
-
-static int tb_switch_tmu_config_enable(struct device *dev, void *rate)
-{
-	if (tb_is_switch(dev)) {
-		struct tb_switch *sw = tb_to_switch(dev);
-
-		tb_switch_tmu_configure(sw, *(enum tb_switch_tmu_rate *)rate,
-					tb_switch_is_clx_enabled(sw, TB_CL1));
-		if (tb_switch_tmu_enable(sw))
-			tb_sw_dbg(sw, "fail switching TMU mode for 1st depth router\n");
-	}
-
-	return 0;
-}
-
-/**
- * tb_switch_enable_tmu_1st_child - Configure and enable TMU for 1st chidren
- * @sw: The router to configure and enable it's children TMU
- * @rate: Rate of the TMU to configure the router's chidren to
- *
- * Configures and enables the TMU mode of 1st depth children of the specified
- * router to the specified rate.
- */
-void tb_switch_enable_tmu_1st_child(struct tb_switch *sw,
-				    enum tb_switch_tmu_rate rate)
-{
-	device_for_each_child(&sw->dev, &rate,
-			      tb_switch_tmu_config_enable);
-}
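The removed helper is replaced by the driver core's device_for_each_child(),
which calls the callback for every child device and stops if it returns
non-zero. A standalone sketch of that pattern (invented toy types standing
in for struct device and tb_to_switch(), which returns NULL for non-router
children such as XDomains):

/* toy_for_each_child.c - standalone sketch, not driver code. */
#include <stdio.h>
#include <stddef.h>

enum toy_kind { TOY_SWITCH, TOY_OTHER };

struct toy_device {
	enum toy_kind kind;
	const char *name;
};

/* Mirrors tb_to_switch(): NULL unless the child really is a router */
static struct toy_device *toy_to_switch(struct toy_device *dev)
{
	return dev->kind == TOY_SWITCH ? dev : NULL;
}

static int increase_accuracy(struct toy_device *dev, void *data)
{
	struct toy_device *sw = toy_to_switch(dev);

	(void)data;
	if (sw)	/* skip children that are not routers */
		printf("configure TMU on %s\n", sw->name);
	return 0;	/* non-zero would stop the iteration */
}

static int toy_for_each_child(struct toy_device *children, size_t n,
			      void *data,
			      int (*fn)(struct toy_device *, void *))
{
	for (size_t i = 0; i < n; i++) {
		int ret = fn(&children[i], data);

		if (ret)
			return ret;
	}
	return 0;
}

int main(void)
{
	struct toy_device children[] = {
		{ TOY_SWITCH, "router at depth 1" },
		{ TOY_OTHER,  "xdomain link" },
		{ TOY_SWITCH, "another depth 1 router" },
	};

	return toy_for_each_child(children, 3, NULL, increase_accuracy);
}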
a="354684434" X-IronPort-AV: E=Sophos;i="6.00,201,1681196400"; d="scan'208";a="354684434" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 May 2023 03:04:27 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10724"; a="683518465" X-IronPort-AV: E=Sophos;i="6.00,201,1681196400"; d="scan'208";a="683518465" Received: from black.fi.intel.com ([10.237.72.28]) by orsmga006.jf.intel.com with ESMTP; 29 May 2023 03:04:24 -0700 Received: by black.fi.intel.com (Postfix, from userid 1001) id 17FE75E2; Mon, 29 May 2023 13:04:26 +0300 (EEST) From: Mika Westerberg To: linux-usb@vger.kernel.org Cc: Yehezkel Bernat , Michael Jamet , Lukas Wunner , Andreas Noever , Gil Fine , Mika Westerberg Subject: [PATCH 07/20] thunderbolt: Move TMU configuration to tb_enable_tmu() Date: Mon, 29 May 2023 13:04:12 +0300 Message-Id: <20230529100425.6125-8-mika.westerberg@linux.intel.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230529100425.6125-1-mika.westerberg@linux.intel.com> References: <20230529100425.6125-1-mika.westerberg@linux.intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-usb@vger.kernel.org There is no need to duplicate the code the enables TMU. Also update the comment to better explain why we do this in the first place. No functional changes. Signed-off-by: Mika Westerberg --- drivers/thunderbolt/tb.c | 30 ++++++++++-------------------- 1 file changed, 10 insertions(+), 20 deletions(-) diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c index 39ec7094fe17..0630b877136e 100644 --- a/drivers/thunderbolt/tb.c +++ b/drivers/thunderbolt/tb.c @@ -387,6 +387,16 @@ static int tb_enable_tmu(struct tb_switch *sw) { int ret; + /* + * If CL1 is enabled then we need to configure the TMU accuracy + * level to normal. Otherwise we keep the TMU running at the + * highest accuracy. + */ + if (tb_switch_is_clx_enabled(sw, TB_CL1)) + tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_NORMAL, true); + else + tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_HIFI, false); + /* If it is already enabled in correct mode, don't touch it */ if (tb_switch_tmu_is_enabled(sw)) return 0; @@ -873,16 +883,6 @@ static void tb_scan_port(struct tb_port *port) tb_switch_clx_name(TB_CL1)); } - if (tb_switch_is_clx_enabled(sw, TB_CL1)) - /* - * To support highest CLx state, we set router's TMU to - * Normal-Uni mode. - */ - tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_NORMAL, true); - else - /* If CLx disabled, configure router's TMU to HiFi-Bidir mode*/ - tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_HIFI, false); - if (tb_enable_tmu(sw)) tb_sw_warn(sw, "failed to enable TMU\n"); @@ -2035,16 +2035,6 @@ static void tb_restore_children(struct tb_switch *sw) tb_sw_warn(sw, "failed to re-enable %s on upstream port\n", tb_switch_clx_name(TB_CL1)); - if (tb_switch_is_clx_enabled(sw, TB_CL1)) - /* - * To support highest CLx state, we set router's TMU to - * Normal-Uni mode. 
@@ -2035,16 +2035,6 @@ static void tb_restore_children(struct tb_switch *sw)
 		tb_sw_warn(sw, "failed to re-enable %s on upstream port\n",
 			   tb_switch_clx_name(TB_CL1));
 
-	if (tb_switch_is_clx_enabled(sw, TB_CL1))
-		/*
-		 * To support highest CLx state, we set router's TMU to
-		 * Normal-Uni mode.
-		 */
-		tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_NORMAL, true);
-	else
-		/* If CLx disabled, configure router's TMU to HiFi-Bidir mode*/
-		tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_HIFI, false);
-
 	if (tb_enable_tmu(sw))
 		tb_sw_warn(sw, "failed to restore TMU configuration\n");
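After this change tb_enable_tmu() is the single place that both configures
and enables the TMU. A standalone sketch of the resulting configure-first,
bail-out-if-already-correct flow (invented state standing in for the
tb_switch_tmu_*() calls in the diff):

/* toy_enable_tmu.c - standalone sketch, not driver code. */
#include <stdbool.h>
#include <stdio.h>

enum toy_rate { RATE_NORMAL, RATE_HIFI };

struct toy_tmu {
	bool cl1_enabled;	/* stands in for the CLx state check */
	enum toy_rate rate;	/* currently programmed rate */
	enum toy_rate rate_request;
	bool enabled;
};

static int toy_enable_tmu(struct toy_tmu *tmu)
{
	/* Decide the wanted configuration first, like the moved code */
	tmu->rate_request = tmu->cl1_enabled ? RATE_NORMAL : RATE_HIFI;

	/* Already enabled in the correct mode? Don't touch it. */
	if (tmu->enabled && tmu->rate == tmu->rate_request)
		return 0;

	tmu->enabled = false;		/* tb_switch_tmu_disable() */
	tmu->rate = tmu->rate_request;	/* reprogram the rate */
	tmu->enabled = true;		/* tb_switch_tmu_enable() */
	return 0;
}

int main(void)
{
	struct toy_tmu tmu = {
		.cl1_enabled = true, .rate = RATE_HIFI, .enabled = true,
	};

	toy_enable_tmu(&tmu);
	printf("rate=%s enabled=%d\n",
	       tmu.rate == RATE_NORMAL ? "normal" : "hifi", tmu.enabled);
	return 0;
}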
From patchwork Mon May 29 10:04:13 2023
X-Patchwork-Submitter: Mika Westerberg
X-Patchwork-Id: 686840
From: Mika Westerberg
To: linux-usb@vger.kernel.org
Cc: Yehezkel Bernat, Michael Jamet, Lukas Wunner, Andreas Noever, Gil Fine, Mika Westerberg
Subject: [PATCH 08/20] thunderbolt: Move tb_enable_tmu() close to other TMU functions
Date: Mon, 29 May 2023 13:04:13 +0300
Message-Id: <20230529100425.6125-9-mika.westerberg@linux.intel.com>
In-Reply-To: <20230529100425.6125-1-mika.westerberg@linux.intel.com>
References: <20230529100425.6125-1-mika.westerberg@linux.intel.com>
X-Mailing-List: linux-usb@vger.kernel.org

This makes the code easier to follow. No functional changes.

Signed-off-by: Mika Westerberg
---
 drivers/thunderbolt/tb.c | 58 ++++++++++++++++++++--------------------
 1 file changed, 29 insertions(+), 29 deletions(-)

diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
index 0630b877136e..41c353f462e7 100644
--- a/drivers/thunderbolt/tb.c
+++ b/drivers/thunderbolt/tb.c
@@ -272,6 +272,35 @@ static void tb_increase_tmu_accuracy(struct tb_tunnel *tunnel)
 	device_for_each_child(&sw->dev, NULL, tb_increase_switch_tmu_accuracy);
 }
 
+static int tb_enable_tmu(struct tb_switch *sw)
+{
+	int ret;
+
+	/*
+	 * If CL1 is enabled then we need to configure the TMU accuracy
+	 * level to normal. Otherwise we keep the TMU running at the
+	 * highest accuracy.
+	 */
+	if (tb_switch_is_clx_enabled(sw, TB_CL1))
+		tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_NORMAL, true);
+	else
+		tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_HIFI, false);
+
+	/* If it is already enabled in correct mode, don't touch it */
+	if (tb_switch_tmu_is_enabled(sw))
+		return 0;
+
+	ret = tb_switch_tmu_disable(sw);
+	if (ret)
+		return ret;
+
+	ret = tb_switch_tmu_post_time(sw);
+	if (ret)
+		return ret;
+
+	return tb_switch_tmu_enable(sw);
+}
+
 static void tb_switch_discover_tunnels(struct tb_switch *sw,
 				       struct list_head *list,
 				       bool alloc_hopids)
@@ -383,35 +412,6 @@ static void tb_scan_xdomain(struct tb_port *port)
 	}
 }
 
-static int tb_enable_tmu(struct tb_switch *sw)
-{
-	int ret;
-
-	/*
-	 * If CL1 is enabled then we need to configure the TMU accuracy
-	 * level to normal. Otherwise we keep the TMU running at the
-	 * highest accuracy.
-	 */
-	if (tb_switch_is_clx_enabled(sw, TB_CL1))
-		tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_NORMAL, true);
-	else
-		tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_HIFI, false);
-
-	/* If it is already enabled in correct mode, don't touch it */
-	if (tb_switch_tmu_is_enabled(sw))
-		return 0;
-
-	ret = tb_switch_tmu_disable(sw);
-	if (ret)
-		return ret;
-
-	ret = tb_switch_tmu_post_time(sw);
-	if (ret)
-		return ret;
-
-	return tb_switch_tmu_enable(sw);
-}
-
 /**
  * tb_find_unused_port() - return the first inactive port on @sw
  * @sw: Switch to find the port on
BQYc4nYcrLpOr+gd+MaM942wt2hlaOfiKu3Hw4NR5YB4XIo+HlPsAW3sh oc0YgI7Vi6X6vO+4s9DzK1p5AaX8dCE3l1RpBhr0ojJwEGtjLPufazsgS bmezyfsV+DTFiwFj+oNBQrdWeTPbBZ+iMPoPo1hwG3iJOcF7CNr+W/q/1 /1qrSWo3E4K5axhnlHyFM2n/Goj8RyuIofhvqUC1QWfYrDwhhJGBKSeFY A==; X-IronPort-AV: E=McAfee;i="6600,9927,10724"; a="354684443" X-IronPort-AV: E=Sophos;i="6.00,201,1681196400"; d="scan'208";a="354684443" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 May 2023 03:04:28 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10724"; a="683518466" X-IronPort-AV: E=Sophos;i="6.00,201,1681196400"; d="scan'208";a="683518466" Received: from black.fi.intel.com ([10.237.72.28]) by orsmga006.jf.intel.com with ESMTP; 29 May 2023 03:04:24 -0700 Received: by black.fi.intel.com (Postfix, from userid 1001) id 3B3DA916; Mon, 29 May 2023 13:04:26 +0300 (EEST) From: Mika Westerberg To: linux-usb@vger.kernel.org Cc: Yehezkel Bernat , Michael Jamet , Lukas Wunner , Andreas Noever , Gil Fine , Mika Westerberg Subject: [PATCH 10/20] thunderbolt: Move CLx support functions into clx.c Date: Mon, 29 May 2023 13:04:15 +0300 Message-Id: <20230529100425.6125-11-mika.westerberg@linux.intel.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230529100425.6125-1-mika.westerberg@linux.intel.com> References: <20230529100425.6125-1-mika.westerberg@linux.intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-usb@vger.kernel.org There really don't belong to switch.c so move them into their own file. As we do this rename the functions to match the conventions used elsewhere in the driver. No functional changes. Signed-off-by: Mika Westerberg --- drivers/thunderbolt/Makefile | 2 +- drivers/thunderbolt/clx.c | 362 ++++++++++++++++++++++++++++++++++ drivers/thunderbolt/debugfs.c | 2 +- drivers/thunderbolt/switch.c | 362 +--------------------------------- drivers/thunderbolt/tb.c | 8 +- drivers/thunderbolt/tb.h | 17 +- drivers/thunderbolt/tmu.c | 6 +- 7 files changed, 381 insertions(+), 378 deletions(-) create mode 100644 drivers/thunderbolt/clx.c diff --git a/drivers/thunderbolt/Makefile b/drivers/thunderbolt/Makefile index 78fd365893c1..c8b3d7b78098 100644 --- a/drivers/thunderbolt/Makefile +++ b/drivers/thunderbolt/Makefile @@ -2,7 +2,7 @@ obj-${CONFIG_USB4} := thunderbolt.o thunderbolt-objs := nhi.o nhi_ops.o ctl.o tb.o switch.o cap.o path.o tunnel.o eeprom.o thunderbolt-objs += domain.o dma_port.o icm.o property.o xdomain.o lc.o tmu.o usb4.o -thunderbolt-objs += usb4_port.o nvm.o retimer.o quirks.o +thunderbolt-objs += usb4_port.o nvm.o retimer.o quirks.o clx.o thunderbolt-${CONFIG_ACPI} += acpi.o thunderbolt-$(CONFIG_DEBUG_FS) += debugfs.o diff --git a/drivers/thunderbolt/clx.c b/drivers/thunderbolt/clx.c new file mode 100644 index 000000000000..d5b46a8e57c9 --- /dev/null +++ b/drivers/thunderbolt/clx.c @@ -0,0 +1,362 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * CLx support + * + * Copyright (C) 2020 - 2023, Intel Corporation + * Authors: Gil Fine + * Mika Westerberg + */ + +#include + +#include "tb.h" + +static bool clx_enabled = true; +module_param_named(clx, clx_enabled, bool, 0444); +MODULE_PARM_DESC(clx, "allow low power states on the high-speed lanes (default: true)"); + +static int tb_port_pm_secondary_set(struct tb_port *port, bool secondary) +{ + u32 phy; + int ret; + + ret = tb_port_read(port, &phy, TB_CFG_PORT, + port->cap_phy + LANE_ADP_CS_1, 1); + if (ret) + return ret; + + if (secondary) + phy |= LANE_ADP_CS_1_PMS; + else + phy &= 
~LANE_ADP_CS_1_PMS; + + return tb_port_write(port, &phy, TB_CFG_PORT, + port->cap_phy + LANE_ADP_CS_1, 1); +} + +static int tb_port_pm_secondary_enable(struct tb_port *port) +{ + return tb_port_pm_secondary_set(port, true); +} + +static int tb_port_pm_secondary_disable(struct tb_port *port) +{ + return tb_port_pm_secondary_set(port, false); +} + +/* Called for USB4 or Titan Ridge routers only */ +static bool tb_port_clx_supported(struct tb_port *port, unsigned int clx_mask) +{ + u32 val, mask = 0; + bool ret; + + /* Don't enable CLx in case of two single-lane links */ + if (!port->bonded && port->dual_link_port) + return false; + + /* Don't enable CLx in case of inter-domain link */ + if (port->xdomain) + return false; + + if (tb_switch_is_usb4(port->sw)) { + if (!usb4_port_clx_supported(port)) + return false; + } else if (!tb_lc_is_clx_supported(port)) { + return false; + } + + if (clx_mask & TB_CL1) { + /* CL0s and CL1 are enabled and supported together */ + mask |= LANE_ADP_CS_0_CL0S_SUPPORT | LANE_ADP_CS_0_CL1_SUPPORT; + } + if (clx_mask & TB_CL2) + mask |= LANE_ADP_CS_0_CL2_SUPPORT; + + ret = tb_port_read(port, &val, TB_CFG_PORT, + port->cap_phy + LANE_ADP_CS_0, 1); + if (ret) + return false; + + return !!(val & mask); +} + +static int tb_port_clx_set(struct tb_port *port, enum tb_clx clx, bool enable) +{ + u32 phy, mask; + int ret; + + /* CL0s and CL1 are enabled and supported together */ + if (clx == TB_CL1) + mask = LANE_ADP_CS_1_CL0S_ENABLE | LANE_ADP_CS_1_CL1_ENABLE; + else + /* For now we support only CL0s and CL1. Not CL2 */ + return -EOPNOTSUPP; + + ret = tb_port_read(port, &phy, TB_CFG_PORT, + port->cap_phy + LANE_ADP_CS_1, 1); + if (ret) + return ret; + + if (enable) + phy |= mask; + else + phy &= ~mask; + + return tb_port_write(port, &phy, TB_CFG_PORT, + port->cap_phy + LANE_ADP_CS_1, 1); +} + +static int tb_port_clx_disable(struct tb_port *port, enum tb_clx clx) +{ + return tb_port_clx_set(port, clx, false); +} + +static int tb_port_clx_enable(struct tb_port *port, enum tb_clx clx) +{ + return tb_port_clx_set(port, clx, true); +} + +/** + * tb_port_clx_is_enabled() - Is given CL state enabled + * @port: USB4 port to check + * @clx_mask: Mask of CL states to check + * + * Returns true if any of the given CL states is enabled for @port. 
+ */ +bool tb_port_clx_is_enabled(struct tb_port *port, unsigned int clx_mask) +{ + u32 val, mask = 0; + int ret; + + if (!tb_port_clx_supported(port, clx_mask)) + return false; + + if (clx_mask & TB_CL1) + mask |= LANE_ADP_CS_1_CL0S_ENABLE | LANE_ADP_CS_1_CL1_ENABLE; + if (clx_mask & TB_CL2) + mask |= LANE_ADP_CS_1_CL2_ENABLE; + + ret = tb_port_read(port, &val, TB_CFG_PORT, + port->cap_phy + LANE_ADP_CS_1, 1); + if (ret) + return false; + + return !!(val & mask); +} + +static int tb_switch_pm_secondary_resolve(struct tb_switch *sw) +{ + struct tb_port *up, *down; + int ret; + + if (!tb_route(sw)) + return 0; + + up = tb_upstream_port(sw); + down = tb_switch_downstream_port(sw); + ret = tb_port_pm_secondary_enable(up); + if (ret) + return ret; + + return tb_port_pm_secondary_disable(down); +} + +static int tb_switch_mask_clx_objections(struct tb_switch *sw) +{ + int up_port = sw->config.upstream_port_number; + u32 offset, val[2], mask_obj, unmask_obj; + int ret, i; + + /* Only Titan Ridge of pre-USB4 devices support CLx states */ + if (!tb_switch_is_titan_ridge(sw)) + return 0; + + if (!tb_route(sw)) + return 0; + + /* + * In Titan Ridge there are only 2 dual-lane Thunderbolt ports: + * Port A consists of lane adapters 1,2 and + * Port B consists of lane adapters 3,4 + * If upstream port is A, (lanes are 1,2), we mask objections from + * port B (lanes 3,4) and unmask objections from Port A and vice-versa. + */ + if (up_port == 1) { + mask_obj = TB_LOW_PWR_C0_PORT_B_MASK; + unmask_obj = TB_LOW_PWR_C1_PORT_A_MASK; + offset = TB_LOW_PWR_C1_CL1; + } else { + mask_obj = TB_LOW_PWR_C1_PORT_A_MASK; + unmask_obj = TB_LOW_PWR_C0_PORT_B_MASK; + offset = TB_LOW_PWR_C3_CL1; + } + + ret = tb_sw_read(sw, &val, TB_CFG_SWITCH, + sw->cap_lp + offset, ARRAY_SIZE(val)); + if (ret) + return ret; + + for (i = 0; i < ARRAY_SIZE(val); i++) { + val[i] |= mask_obj; + val[i] &= ~unmask_obj; + } + + return tb_sw_write(sw, &val, TB_CFG_SWITCH, + sw->cap_lp + offset, ARRAY_SIZE(val)); +} + +static int __tb_switch_enable_clx(struct tb_switch *sw, enum tb_clx clx) +{ + bool up_clx_support, down_clx_support; + struct tb_port *up, *down; + int ret; + + if (!tb_switch_clx_is_supported(sw)) + return 0; + + /* + * Enable CLx for host router's downstream port as part of the + * downstream router enabling procedure. + */ + if (!tb_route(sw)) + return 0; + + /* Enable CLx only for first hop router (depth = 1) */ + if (tb_route(tb_switch_parent(sw))) + return 0; + + ret = tb_switch_pm_secondary_resolve(sw); + if (ret) + return ret; + + up = tb_upstream_port(sw); + down = tb_switch_downstream_port(sw); + + up_clx_support = tb_port_clx_supported(up, clx); + down_clx_support = tb_port_clx_supported(down, clx); + + tb_port_dbg(up, "%s %ssupported\n", tb_switch_clx_name(clx), + up_clx_support ? "" : "not "); + tb_port_dbg(down, "%s %ssupported\n", tb_switch_clx_name(clx), + down_clx_support ? 
"" : "not "); + + if (!up_clx_support || !down_clx_support) + return -EOPNOTSUPP; + + ret = tb_port_clx_enable(up, clx); + if (ret) + return ret; + + ret = tb_port_clx_enable(down, clx); + if (ret) { + tb_port_clx_disable(up, clx); + return ret; + } + + ret = tb_switch_mask_clx_objections(sw); + if (ret) { + tb_port_clx_disable(up, clx); + tb_port_clx_disable(down, clx); + return ret; + } + + sw->clx = clx; + + tb_port_dbg(up, "%s enabled\n", tb_switch_clx_name(clx)); + return 0; +} + +/** + * tb_switch_clx_enable() - Enable CLx on upstream port of specified router + * @sw: Router to enable CLx for + * @clx: The CLx state to enable + * + * Enable CLx state only for first hop router. That is the most common + * use-case, that is intended for better thermal management, and so helps + * to improve performance. CLx is enabled only if both sides of the link + * support CLx, and if both sides of the link are not configured as two + * single lane links and only if the link is not inter-domain link. The + * complete set of conditions is described in CM Guide 1.0 section 8.1. + * + * Return: Returns 0 on success or an error code on failure. + */ +int tb_switch_clx_enable(struct tb_switch *sw, enum tb_clx clx) +{ + struct tb_switch *root_sw = sw->tb->root_switch; + + if (!clx_enabled) + return 0; + + /* + * CLx is not enabled and validated on Intel USB4 platforms before + * Alder Lake. + */ + if (root_sw->generation < 4 || tb_switch_is_tiger_lake(root_sw)) + return 0; + + switch (clx) { + case TB_CL1: + /* CL0s and CL1 are enabled and supported together */ + return __tb_switch_enable_clx(sw, clx); + + default: + return -EOPNOTSUPP; + } +} + +static int __tb_switch_disable_clx(struct tb_switch *sw, enum tb_clx clx) +{ + struct tb_port *up, *down; + int ret; + + if (!tb_switch_clx_is_supported(sw)) + return 0; + + /* + * Disable CLx for host router's downstream port as part of the + * downstream router enabling procedure. + */ + if (!tb_route(sw)) + return 0; + + /* Disable CLx only for first hop router (depth = 1) */ + if (tb_route(tb_switch_parent(sw))) + return 0; + + up = tb_upstream_port(sw); + down = tb_switch_downstream_port(sw); + ret = tb_port_clx_disable(up, clx); + if (ret) + return ret; + + ret = tb_port_clx_disable(down, clx); + if (ret) + return ret; + + sw->clx = TB_CLX_DISABLE; + + tb_port_dbg(up, "%s disabled\n", tb_switch_clx_name(clx)); + return 0; +} + +/** + * tb_switch_cls_disable() - Disable CLx on upstream port of specified router + * @sw: Router to disable CLx for + * @clx: The CLx state to disable + * + * Return: Returns 0 on success or an error code on failure. + */ +int tb_switch_clx_disable(struct tb_switch *sw, enum tb_clx clx) +{ + if (!clx_enabled) + return 0; + + switch (clx) { + case TB_CL1: + /* CL0s and CL1 are enabled and supported together */ + return __tb_switch_disable_clx(sw, clx); + + default: + return -EOPNOTSUPP; + } +} diff --git a/drivers/thunderbolt/debugfs.c b/drivers/thunderbolt/debugfs.c index f92ad71ef983..e376ad25bf60 100644 --- a/drivers/thunderbolt/debugfs.c +++ b/drivers/thunderbolt/debugfs.c @@ -570,7 +570,7 @@ static int margining_run_write(void *data, u64 val) * CL states may interfere with lane margining so inform the user know * and bail out. 
*/ - if (tb_port_is_clx_enabled(port, TB_CL1 | TB_CL2)) { + if (tb_port_clx_is_enabled(port, TB_CL1 | TB_CL2)) { tb_port_warn(port, "CL states are enabled, Disable them with clx=0 and re-connect\n"); ret = -EINVAL; diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c index 4f3d02c58c9e..984b5536e143 100644 --- a/drivers/thunderbolt/switch.c +++ b/drivers/thunderbolt/switch.c @@ -26,10 +26,6 @@ struct nvm_auth_status { u32 status; }; -static bool clx_enabled = true; -module_param_named(clx, clx_enabled, bool, 0444); -MODULE_PARM_DESC(clx, "allow low power states on the high-speed lanes (default: true)"); - /* * Hold NVM authentication failure status per switch This information * needs to stay around even when the switch gets power cycled so we @@ -1183,135 +1179,6 @@ int tb_port_update_credits(struct tb_port *port) return tb_port_do_update_credits(port->dual_link_port); } -static int __tb_port_pm_secondary_set(struct tb_port *port, bool secondary) -{ - u32 phy; - int ret; - - ret = tb_port_read(port, &phy, TB_CFG_PORT, - port->cap_phy + LANE_ADP_CS_1, 1); - if (ret) - return ret; - - if (secondary) - phy |= LANE_ADP_CS_1_PMS; - else - phy &= ~LANE_ADP_CS_1_PMS; - - return tb_port_write(port, &phy, TB_CFG_PORT, - port->cap_phy + LANE_ADP_CS_1, 1); -} - -static int tb_port_pm_secondary_enable(struct tb_port *port) -{ - return __tb_port_pm_secondary_set(port, true); -} - -static int tb_port_pm_secondary_disable(struct tb_port *port) -{ - return __tb_port_pm_secondary_set(port, false); -} - -/* Called for USB4 or Titan Ridge routers only */ -static bool tb_port_clx_supported(struct tb_port *port, unsigned int clx_mask) -{ - u32 val, mask = 0; - bool ret; - - /* Don't enable CLx in case of two single-lane links */ - if (!port->bonded && port->dual_link_port) - return false; - - /* Don't enable CLx in case of inter-domain link */ - if (port->xdomain) - return false; - - if (tb_switch_is_usb4(port->sw)) { - if (!usb4_port_clx_supported(port)) - return false; - } else if (!tb_lc_is_clx_supported(port)) { - return false; - } - - if (clx_mask & TB_CL1) { - /* CL0s and CL1 are enabled and supported together */ - mask |= LANE_ADP_CS_0_CL0S_SUPPORT | LANE_ADP_CS_0_CL1_SUPPORT; - } - if (clx_mask & TB_CL2) - mask |= LANE_ADP_CS_0_CL2_SUPPORT; - - ret = tb_port_read(port, &val, TB_CFG_PORT, - port->cap_phy + LANE_ADP_CS_0, 1); - if (ret) - return false; - - return !!(val & mask); -} - -static int __tb_port_clx_set(struct tb_port *port, enum tb_clx clx, bool enable) -{ - u32 phy, mask; - int ret; - - /* CL0s and CL1 are enabled and supported together */ - if (clx == TB_CL1) - mask = LANE_ADP_CS_1_CL0S_ENABLE | LANE_ADP_CS_1_CL1_ENABLE; - else - /* For now we support only CL0s and CL1. Not CL2 */ - return -EOPNOTSUPP; - - ret = tb_port_read(port, &phy, TB_CFG_PORT, - port->cap_phy + LANE_ADP_CS_1, 1); - if (ret) - return ret; - - if (enable) - phy |= mask; - else - phy &= ~mask; - - return tb_port_write(port, &phy, TB_CFG_PORT, - port->cap_phy + LANE_ADP_CS_1, 1); -} - -static int tb_port_clx_disable(struct tb_port *port, enum tb_clx clx) -{ - return __tb_port_clx_set(port, clx, false); -} - -static int tb_port_clx_enable(struct tb_port *port, enum tb_clx clx) -{ - return __tb_port_clx_set(port, clx, true); -} - -/** - * tb_port_is_clx_enabled() - Is given CL state enabled - * @port: USB4 port to check - * @clx_mask: Mask of CL states to check - * - * Returns true if any of the given CL states is enabled for @port. 
- */ -bool tb_port_is_clx_enabled(struct tb_port *port, unsigned int clx_mask) -{ - u32 val, mask = 0; - int ret; - - if (!tb_port_clx_supported(port, clx_mask)) - return false; - - if (clx_mask & TB_CL1) - mask |= LANE_ADP_CS_1_CL0S_ENABLE | LANE_ADP_CS_1_CL1_ENABLE; - if (clx_mask & TB_CL2) - mask |= LANE_ADP_CS_1_CL2_ENABLE; - - ret = tb_port_read(port, &val, TB_CFG_PORT, - port->cap_phy + LANE_ADP_CS_1, 1); - if (ret) - return false; - - return !!(val & mask); -} - static int tb_port_start_lane_initialization(struct tb_port *port) { int ret; @@ -3246,8 +3113,8 @@ void tb_switch_suspend(struct tb_switch *sw, bool runtime) * done for USB4 device too as CLx is re-enabled at resume. * CL0s and CL1 are enabled and supported together. */ - if (tb_switch_is_clx_enabled(sw, TB_CL1)) { - if (tb_switch_disable_clx(sw, TB_CL1)) + if (tb_switch_clx_is_enabled(sw, TB_CL1)) { + if (tb_switch_clx_disable(sw, TB_CL1)) tb_sw_warn(sw, "failed to disable %s on upstream port\n", tb_switch_clx_name(TB_CL1)); } @@ -3472,231 +3339,6 @@ struct tb_port *tb_switch_find_port(struct tb_switch *sw, return NULL; } -static int tb_switch_pm_secondary_resolve(struct tb_switch *sw) -{ - struct tb_port *up, *down; - int ret; - - if (!tb_route(sw)) - return 0; - - up = tb_upstream_port(sw); - down = tb_switch_downstream_port(sw); - ret = tb_port_pm_secondary_enable(up); - if (ret) - return ret; - - return tb_port_pm_secondary_disable(down); -} - -static int __tb_switch_enable_clx(struct tb_switch *sw, enum tb_clx clx) -{ - bool up_clx_support, down_clx_support; - struct tb_port *up, *down; - int ret; - - if (!tb_switch_is_clx_supported(sw)) - return 0; - - /* - * Enable CLx for host router's downstream port as part of the - * downstream router enabling procedure. - */ - if (!tb_route(sw)) - return 0; - - /* Enable CLx only for first hop router (depth = 1) */ - if (tb_route(tb_switch_parent(sw))) - return 0; - - ret = tb_switch_pm_secondary_resolve(sw); - if (ret) - return ret; - - up = tb_upstream_port(sw); - down = tb_switch_downstream_port(sw); - - up_clx_support = tb_port_clx_supported(up, clx); - down_clx_support = tb_port_clx_supported(down, clx); - - tb_port_dbg(up, "%s %ssupported\n", tb_switch_clx_name(clx), - up_clx_support ? "" : "not "); - tb_port_dbg(down, "%s %ssupported\n", tb_switch_clx_name(clx), - down_clx_support ? "" : "not "); - - if (!up_clx_support || !down_clx_support) - return -EOPNOTSUPP; - - ret = tb_port_clx_enable(up, clx); - if (ret) - return ret; - - ret = tb_port_clx_enable(down, clx); - if (ret) { - tb_port_clx_disable(up, clx); - return ret; - } - - ret = tb_switch_mask_clx_objections(sw); - if (ret) { - tb_port_clx_disable(up, clx); - tb_port_clx_disable(down, clx); - return ret; - } - - sw->clx = clx; - - tb_port_dbg(up, "%s enabled\n", tb_switch_clx_name(clx)); - return 0; -} - -/** - * tb_switch_enable_clx() - Enable CLx on upstream port of specified router - * @sw: Router to enable CLx for - * @clx: The CLx state to enable - * - * Enable CLx state only for first hop router. That is the most common - * use-case, that is intended for better thermal management, and so helps - * to improve performance. CLx is enabled only if both sides of the link - * support CLx, and if both sides of the link are not configured as two - * single lane links and only if the link is not inter-domain link. The - * complete set of conditions is described in CM Guide 1.0 section 8.1. - * - * Return: Returns 0 on success or an error code on failure. 
- */ -int tb_switch_enable_clx(struct tb_switch *sw, enum tb_clx clx) -{ - struct tb_switch *root_sw = sw->tb->root_switch; - - if (!clx_enabled) - return 0; - - /* - * CLx is not enabled and validated on Intel USB4 platforms before - * Alder Lake. - */ - if (root_sw->generation < 4 || tb_switch_is_tiger_lake(root_sw)) - return 0; - - switch (clx) { - case TB_CL1: - /* CL0s and CL1 are enabled and supported together */ - return __tb_switch_enable_clx(sw, clx); - - default: - return -EOPNOTSUPP; - } -} - -static int __tb_switch_disable_clx(struct tb_switch *sw, enum tb_clx clx) -{ - struct tb_port *up, *down; - int ret; - - if (!tb_switch_is_clx_supported(sw)) - return 0; - - /* - * Disable CLx for host router's downstream port as part of the - * downstream router enabling procedure. - */ - if (!tb_route(sw)) - return 0; - - /* Disable CLx only for first hop router (depth = 1) */ - if (tb_route(tb_switch_parent(sw))) - return 0; - - up = tb_upstream_port(sw); - down = tb_switch_downstream_port(sw); - ret = tb_port_clx_disable(up, clx); - if (ret) - return ret; - - ret = tb_port_clx_disable(down, clx); - if (ret) - return ret; - - sw->clx = TB_CLX_DISABLE; - - tb_port_dbg(up, "%s disabled\n", tb_switch_clx_name(clx)); - return 0; -} - -/** - * tb_switch_disable_clx() - Disable CLx on upstream port of specified router - * @sw: Router to disable CLx for - * @clx: The CLx state to disable - * - * Return: Returns 0 on success or an error code on failure. - */ -int tb_switch_disable_clx(struct tb_switch *sw, enum tb_clx clx) -{ - if (!clx_enabled) - return 0; - - switch (clx) { - case TB_CL1: - /* CL0s and CL1 are enabled and supported together */ - return __tb_switch_disable_clx(sw, clx); - - default: - return -EOPNOTSUPP; - } -} - -/** - * tb_switch_mask_clx_objections() - Mask CLx objections for a router - * @sw: Router to mask objections for - * - * Mask the objections coming from the second depth routers in order to - * stop these objections from interfering with the CLx states of the first - * depth link. - */ -int tb_switch_mask_clx_objections(struct tb_switch *sw) -{ - int up_port = sw->config.upstream_port_number; - u32 offset, val[2], mask_obj, unmask_obj; - int ret, i; - - /* Only Titan Ridge of pre-USB4 devices support CLx states */ - if (!tb_switch_is_titan_ridge(sw)) - return 0; - - if (!tb_route(sw)) - return 0; - - /* - * In Titan Ridge there are only 2 dual-lane Thunderbolt ports: - * Port A consists of lane adapters 1,2 and - * Port B consists of lane adapters 3,4 - * If upstream port is A, (lanes are 1,2), we mask objections from - * port B (lanes 3,4) and unmask objections from Port A and vice-versa. - */ - if (up_port == 1) { - mask_obj = TB_LOW_PWR_C0_PORT_B_MASK; - unmask_obj = TB_LOW_PWR_C1_PORT_A_MASK; - offset = TB_LOW_PWR_C1_CL1; - } else { - mask_obj = TB_LOW_PWR_C1_PORT_A_MASK; - unmask_obj = TB_LOW_PWR_C0_PORT_B_MASK; - offset = TB_LOW_PWR_C3_CL1; - } - - ret = tb_sw_read(sw, &val, TB_CFG_SWITCH, - sw->cap_lp + offset, ARRAY_SIZE(val)); - if (ret) - return ret; - - for (i = 0; i < ARRAY_SIZE(val); i++) { - val[i] |= mask_obj; - val[i] &= ~unmask_obj; - } - - return tb_sw_write(sw, &val, TB_CFG_SWITCH, - sw->cap_lp + offset, ARRAY_SIZE(val)); -} - /* * Can be used for read/write a specified PCIe bridge for any Thunderbolt 3 * device. For now used only for Titan Ridge. 
diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c index 91459bf2fd0f..c7cfd740520a 100644 --- a/drivers/thunderbolt/tb.c +++ b/drivers/thunderbolt/tb.c @@ -247,7 +247,7 @@ static int tb_increase_switch_tmu_accuracy(struct device *dev, void *data) sw = tb_to_switch(dev); if (sw) { tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_HIFI, - tb_switch_is_clx_enabled(sw, TB_CL1)); + tb_switch_clx_is_enabled(sw, TB_CL1)); if (tb_switch_tmu_enable(sw)) tb_sw_warn(sw, "failed to increase TMU rate\n"); } @@ -281,7 +281,7 @@ static int tb_enable_tmu(struct tb_switch *sw) * level to normal. Otherwise we keep the TMU running at the * highest accuracy. */ - if (tb_switch_is_clx_enabled(sw, TB_CL1)) + if (tb_switch_clx_is_enabled(sw, TB_CL1)) ret = tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_NORMAL, true); else ret = tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_HIFI, false); @@ -879,7 +879,7 @@ static void tb_scan_port(struct tb_port *port) if (discovery) { tb_sw_dbg(sw, "discovery, not touching CL states\n"); } else { - ret = tb_switch_enable_clx(sw, TB_CL1); + ret = tb_switch_clx_enable(sw, TB_CL1); if (ret && ret != -EOPNOTSUPP) tb_sw_warn(sw, "failed to enable %s on upstream port\n", tb_switch_clx_name(TB_CL1)); @@ -2032,7 +2032,7 @@ static void tb_restore_children(struct tb_switch *sw) * CL0s and CL1 are enabled and supported together. * Silently ignore CLx re-enabling in case CLx is not supported. */ - ret = tb_switch_enable_clx(sw, TB_CL1); + ret = tb_switch_clx_enable(sw, TB_CL1); if (ret && ret != -EOPNOTSUPP) tb_sw_warn(sw, "failed to re-enable %s on upstream port\n", tb_switch_clx_name(TB_CL1)); diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h index 07e4e7b37f13..d29bc7eab051 100644 --- a/drivers/thunderbolt/tb.h +++ b/drivers/thunderbolt/tb.h @@ -1002,6 +1002,8 @@ static inline bool tb_switch_tmu_is_enabled(const struct tb_switch *sw) sw->tmu.unidirectional == sw->tmu.unidirectional_request; } +bool tb_port_clx_is_enabled(struct tb_port *port, unsigned int clx_mask); + static inline const char *tb_switch_clx_name(enum tb_clx clx) { switch (clx) { @@ -1013,28 +1015,28 @@ static inline const char *tb_switch_clx_name(enum tb_clx clx) } } -int tb_switch_enable_clx(struct tb_switch *sw, enum tb_clx clx); -int tb_switch_disable_clx(struct tb_switch *sw, enum tb_clx clx); +int tb_switch_clx_enable(struct tb_switch *sw, enum tb_clx clx); +int tb_switch_clx_disable(struct tb_switch *sw, enum tb_clx clx); /** - * tb_switch_is_clx_enabled() - Checks if the CLx is enabled + * tb_switch_clx_is_enabled() - Checks if the CLx is enabled * @sw: Router to check for the CLx * @clx: The CLx state to check for * * Checks if the specified CLx is enabled on the router upstream link. * Not applicable for a host router. 
*/ -static inline bool tb_switch_is_clx_enabled(const struct tb_switch *sw, +static inline bool tb_switch_clx_is_enabled(const struct tb_switch *sw, enum tb_clx clx) { return sw->clx == clx; } /** - * tb_switch_is_clx_supported() - Is CLx supported on this type of router + * tb_switch_clx_is_supported() - Is CLx supported on this type of router * @sw: The router to check CLx support for */ -static inline bool tb_switch_is_clx_supported(const struct tb_switch *sw) +static inline bool tb_switch_clx_is_supported(const struct tb_switch *sw) { if (sw->quirks & QUIRK_NO_CLX) return false; @@ -1042,8 +1044,6 @@ static inline bool tb_switch_is_clx_supported(const struct tb_switch *sw) return tb_switch_is_usb4(sw) || tb_switch_is_titan_ridge(sw); } -int tb_switch_mask_clx_objections(struct tb_switch *sw); - int tb_switch_pcie_l1_enable(struct tb_switch *sw); int tb_switch_xhci_connect(struct tb_switch *sw); @@ -1089,7 +1089,6 @@ void tb_port_lane_bonding_disable(struct tb_port *port); int tb_port_wait_for_link_width(struct tb_port *port, int width, int timeout_msec); int tb_port_update_credits(struct tb_port *port); -bool tb_port_is_clx_enabled(struct tb_port *port, unsigned int clx); int tb_switch_find_vse_cap(struct tb_switch *sw, enum tb_switch_vse_cap vsec); int tb_switch_find_cap(struct tb_switch *sw, enum tb_switch_cap cap); diff --git a/drivers/thunderbolt/tmu.c b/drivers/thunderbolt/tmu.c index be310d97ea7b..6988704c845c 100644 --- a/drivers/thunderbolt/tmu.c +++ b/drivers/thunderbolt/tmu.c @@ -388,7 +388,7 @@ int tb_switch_tmu_disable(struct tb_switch *sw) * on these devices e.g. Alpine Ridge and earlier, the TMU mode * HiFi bi-directional is enabled by default and we don't change it. */ - if (!tb_switch_is_clx_supported(sw)) + if (!tb_switch_clx_is_supported(sw)) return 0; /* Already disabled? */ @@ -653,7 +653,7 @@ int tb_switch_tmu_enable(struct tb_switch *sw) * these devices e.g. Alpine Ridge and earlier, the TMU mode HiFi * bi-directional is enabled by default. */ - if (!tb_switch_is_clx_supported(sw)) + if (!tb_switch_clx_is_supported(sw)) return 0; if (tb_switch_tmu_is_enabled(sw)) @@ -664,7 +664,7 @@ int tb_switch_tmu_enable(struct tb_switch *sw) * Titan Ridge supports CL0s and CL1 only. CL0s and CL1 are * enabled and supported together. 
*/ - if (!tb_switch_is_clx_enabled(sw, TB_CL1)) + if (!tb_switch_clx_is_enabled(sw, TB_CL1)) return -EOPNOTSUPP; ret = tb_switch_tmu_disable_objections(sw);
From patchwork Mon May 29 10:04:19 2023
X-Patchwork-Submitter: Mika Westerberg
X-Patchwork-Id: 686839
From: Mika Westerberg
To: linux-usb@vger.kernel.org
Cc: Yehezkel Bernat, Michael Jamet, Lukas Wunner, Andreas Noever, Gil Fine, Mika Westerberg
Subject: [PATCH 14/20] thunderbolt: Check for first depth router in tb.c
Date: Mon, 29 May 2023 13:04:19 +0300
Message-Id: <20230529100425.6125-15-mika.westerberg@linux.intel.com>
In-Reply-To: <20230529100425.6125-1-mika.westerberg@linux.intel.com>

Currently tb_switch_clx_enable() enables CL states only for first depth routers. This is something we may want to change in the future, and in addition it is not visible from the calling path at all. For this reason, do the check in tb.c so it is immediately visible that we only do this for first depth routers. Fix the kernel-docs accordingly.
Signed-off-by: Mika Westerberg --- drivers/thunderbolt/clx.c | 22 ++++++---------------- drivers/thunderbolt/tb.c | 10 ++++++++++ 2 files changed, 16 insertions(+), 16 deletions(-) diff --git a/drivers/thunderbolt/clx.c b/drivers/thunderbolt/clx.c index 4601607f1901..b8cfbd643311 100644 --- a/drivers/thunderbolt/clx.c +++ b/drivers/thunderbolt/clx.c @@ -257,14 +257,12 @@ static const char *clx_name(unsigned int clx) * @sw: Router to enable CLx for * @clx: The CLx state to enable * - * Enable CLx state only for first hop router. That is the most common - * use-case, that is intended for better thermal management, and so helps - * to improve performance. CLx is enabled only if both sides of the link - * support CLx, and if both sides of the link are not configured as two - * single lane links and only if the link is not inter-domain link. The - * complete set of conditions is described in CM Guide 1.0 section 8.1. + * CLx is enabled only if both sides of the link support CLx, and if both sides + * of the link are not configured as two single lane links and only if the link + * is not inter-domain link. The complete set of conditions is described in CM + * Guide 1.0 section 8.1. * - * Return: Returns 0 on success or an error code on failure. + * Returns %0 on success or an error code on failure. */ int tb_switch_clx_enable(struct tb_switch *sw, unsigned int clx) { @@ -284,10 +282,6 @@ int tb_switch_clx_enable(struct tb_switch *sw, unsigned int clx) !tb_switch_clx_is_supported(sw)) return 0; - /* Enable CLx only for first hop router (depth = 1) */ - if (tb_route(tb_switch_parent(sw))) - return 0; - /* CL2 is not yet supported */ if (clx & TB_CL2) return -EOPNOTSUPP; @@ -340,7 +334,7 @@ int tb_switch_clx_enable(struct tb_switch *sw, unsigned int clx) * Disables all CL states of the given router. Can be called on any * router and if the states were not enabled already does nothing. * - * Return: Returns 0 on success or an error code on failure. + * Returns %0 on success or an error code on failure. */ int tb_switch_clx_disable(struct tb_switch *sw) { @@ -351,10 +345,6 @@ int tb_switch_clx_disable(struct tb_switch *sw) if (!tb_switch_clx_is_supported(sw)) return 0; - /* Disable CLx only for first hop router (depth = 1) */ - if (tb_route(tb_switch_parent(sw))) - return 0; - if (!clx) return 0; diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c index 2d360508aeeb..1d056ff6d77f 100644 --- a/drivers/thunderbolt/tb.c +++ b/drivers/thunderbolt/tb.c @@ -244,6 +244,16 @@ static int tb_enable_clx(struct tb_switch *sw) { int ret; + /* + * Currently only enable CLx for the first link. This is enough + * to allow the CPU to save energy at least on Intel hardware + * and makes it slightly simpler to implement. We may change + * this in the future to cover the whole topology if it turns + * out to be beneficial. + */ + if (sw->config.depth != 1) + return 0; + /* * CL0s and CL1 are enabled and supported together. * Silently ignore CLx enabling in case CLx is not supported. 
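To illustrate the resulting split, here is a minimal sketch (not part of the patch; example_enable_clx() is a hypothetical caller) of how the policy now reads from tb.c while tb_switch_clx_enable() in clx.c keeps only the generic link-level checks:

	/* Sketch only: mirrors the tb_enable_clx() hunk above. */
	static int example_enable_clx(struct tb_switch *sw)
	{
		/*
		 * Policy: only first depth routers (directly connected
		 * to the host router) get CL states enabled.
		 */
		if (sw->config.depth != 1)
			return 0;

		/* Mechanism: clx.c validates both link ends and programs them. */
		return tb_switch_clx_enable(sw, TB_CL1);
	}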
From patchwork Mon May 29 10:04:22 2023
X-Patchwork-Submitter: Mika Westerberg
X-Patchwork-Id: 686836
From: Mika Westerberg
To: linux-usb@vger.kernel.org
Cc: Yehezkel Bernat, Michael Jamet, Lukas Wunner, Andreas Noever, Gil Fine, Mika Westerberg
Subject: [PATCH 17/20] thunderbolt: Prefix CL state related log messages with "CLx: "
Date: Mon, 29 May 2023 13:04:22 +0300
Message-Id: <20230529100425.6125-18-mika.westerberg@linux.intel.com>
In-Reply-To: <20230529100425.6125-1-mika.westerberg@linux.intel.com>

This makes the CL state messages easier to spot in the logs and follows what we already do in the TMU code. We also log enabling/disabling of CL states using tb_sw_dbg() instead of tb_port_dbg().
Signed-off-by: Mika Westerberg --- drivers/thunderbolt/clx.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/drivers/thunderbolt/clx.c b/drivers/thunderbolt/clx.c index b8cfbd643311..5e745386c413 100644 --- a/drivers/thunderbolt/clx.c +++ b/drivers/thunderbolt/clx.c @@ -296,9 +296,9 @@ int tb_switch_clx_enable(struct tb_switch *sw, unsigned int clx) up_clx_support = tb_port_clx_supported(up, clx); down_clx_support = tb_port_clx_supported(down, clx); - tb_port_dbg(up, "%s %ssupported\n", clx_name(clx), + tb_port_dbg(up, "CLx: %s %ssupported\n", clx_name(clx), up_clx_support ? "" : "not "); - tb_port_dbg(down, "%s %ssupported\n", clx_name(clx), + tb_port_dbg(down, "CLx: %s %ssupported\n", clx_name(clx), down_clx_support ? "" : "not "); if (!up_clx_support || !down_clx_support) @@ -323,7 +323,7 @@ int tb_switch_clx_enable(struct tb_switch *sw, unsigned int clx) sw->clx |= clx; - tb_port_dbg(up, "%s enabled\n", clx_name(clx)); + tb_sw_dbg(sw, "CLx: %s enabled\n", clx_name(clx)); return 0; } @@ -361,6 +361,6 @@ int tb_switch_clx_disable(struct tb_switch *sw) sw->clx = 0; - tb_port_dbg(up, "%s disabled\n", clx_name(clx)); + tb_sw_dbg(sw, "CLx: %s disabled\n", clx_name(clx)); return 0; }
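For illustration only: with the prefix in place and tb_sw_dbg() attributing the message to the router rather than one of its ports, the dynamic debug output would look roughly like this (the router name 0-1 is hypothetical and the exact prefix depends on dev_dbg() formatting):

	thunderbolt 0-1: CLx: CL0s/CL1 enabled
	thunderbolt 0-1: CLx: CL0s/CL1 disabled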
From patchwork Mon May 29 10:04:23 2023
X-Patchwork-Submitter: Mika Westerberg
X-Patchwork-Id: 686835
From: Mika Westerberg
To: linux-usb@vger.kernel.org
Cc: Yehezkel Bernat, Michael Jamet, Lukas Wunner, Andreas Noever, Gil Fine, Mika Westerberg
Subject: [PATCH 18/20] thunderbolt: Initialize CL states from the hardware
Date: Mon, 29 May 2023 13:04:23 +0300
Message-Id: <20230529100425.6125-19-mika.westerberg@linux.intel.com>
In-Reply-To: <20230529100425.6125-1-mika.westerberg@linux.intel.com>

In case the boot firmware enabled any of the CL states, read the current configuration from the hardware and update the router structure accordingly.

Signed-off-by: Mika Westerberg --- drivers/thunderbolt/clx.c | 100 +++++++++++++++++++++++++---------- drivers/thunderbolt/switch.c | 4 ++ drivers/thunderbolt/tb.h | 1 + 3 files changed, 78 insertions(+), 27 deletions(-) diff --git a/drivers/thunderbolt/clx.c b/drivers/thunderbolt/clx.c index 5e745386c413..960409df4405 100644 --- a/drivers/thunderbolt/clx.c +++ b/drivers/thunderbolt/clx.c @@ -15,6 +15,21 @@ static bool clx_enabled = true; module_param_named(clx, clx_enabled, bool, 0444); MODULE_PARM_DESC(clx, "allow low power states on the high-speed lanes (default: true)"); +static const char *clx_name(unsigned int clx) +{ + if (!clx) + return "disabled"; + + if (clx & TB_CL2) + return "CL0s/CL1/CL2"; + if (clx & TB_CL1) + return "CL0s/CL1"; + if (clx & TB_CL0S) + return "CL0s"; + + return "unknown"; +} + static int tb_port_pm_secondary_set(struct tb_port *port, bool secondary) { u32 phy; @@ -117,6 +132,29 @@ static int tb_port_clx_enable(struct tb_port *port, unsigned int clx) return tb_port_clx_set(port, clx, true); } +static int tb_port_clx(struct tb_port *port) +{ + u32 val; + int ret; + + if (!tb_port_clx_supported(port, TB_CL0S | TB_CL1 | TB_CL2)) + return 0; + + ret = tb_port_read(port, &val, TB_CFG_PORT, + port->cap_phy + LANE_ADP_CS_1, 1); + if (ret) + return ret; + + if (val & LANE_ADP_CS_1_CL0S_ENABLE) + ret |= TB_CL0S; + if (val & LANE_ADP_CS_1_CL1_ENABLE) + ret |= TB_CL1; + if (val & LANE_ADP_CS_1_CL2_ENABLE) + ret |= TB_CL2; + + return ret; +} + /** * tb_port_clx_is_enabled() - Is given CL state enabled * @port: USB4 port to check @@ -126,25 +164,45 @@ static int tb_port_clx_enable(struct tb_port *port, unsigned int clx) */ bool tb_port_clx_is_enabled(struct tb_port *port, unsigned int clx) { - u32 val, mask = 0; - int ret; + return !!(tb_port_clx(port) & clx); +} - if (!tb_port_clx_supported(port, clx)) - return false; +/** + * tb_switch_clx_init() - Initialize router CL states + * @sw: Router + * + * Can be called for any router. Initializes the current CL state by + * reading it from the hardware. + * + * Returns %0 in case of success and negative errno in case of failure.
+ */ +int tb_switch_clx_init(struct tb_switch *sw) +{ + struct tb_port *up, *down; + unsigned int clx, tmp; - if (clx & TB_CL0S) - mask |= LANE_ADP_CS_1_CL0S_ENABLE; - if (clx & TB_CL1) - mask |= LANE_ADP_CS_1_CL1_ENABLE; - if (clx & TB_CL2) - mask |= LANE_ADP_CS_1_CL2_ENABLE; + if (tb_switch_is_icm(sw)) + return 0; - ret = tb_port_read(port, &val, TB_CFG_PORT, - port->cap_phy + LANE_ADP_CS_1, 1); - if (ret) - return false; + if (!tb_route(sw)) + return 0; - return !!(val & mask); + if (!tb_switch_clx_is_supported(sw)) + return 0; + + up = tb_upstream_port(sw); + down = tb_switch_downstream_port(sw); + + clx = tb_port_clx(up); + tmp = tb_port_clx(down); + if (clx != tmp) + tb_sw_warn(sw, "CLx: inconsistent configuration %#x != %#x\n", + clx, tmp); + + tb_sw_dbg(sw, "CLx: current mode: %s\n", clx_name(clx)); + + sw->clx = clx; + return 0; } static int tb_switch_pm_secondary_resolve(struct tb_switch *sw) @@ -240,18 +298,6 @@ static bool validate_mask(unsigned int clx) return true; } -static const char *clx_name(unsigned int clx) -{ - if (clx & TB_CL2) - return "CL0s/CL1/CL2"; - if (clx & TB_CL1) - return "CL0s/CL1"; - if (clx & TB_CL0S) - return "CL0s"; - - return "unknown"; -} - /** * tb_switch_clx_enable() - Enable CLx on upstream port of specified router * @sw: Router to enable CLx for diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c index f33a09d92c9b..0c11caec7e8e 100644 --- a/drivers/thunderbolt/switch.c +++ b/drivers/thunderbolt/switch.c @@ -2859,6 +2859,10 @@ int tb_switch_add(struct tb_switch *sw) if (ret) return ret; + ret = tb_switch_clx_init(sw); + if (ret) + return ret; + ret = tb_switch_tmu_init(sw); if (ret) return ret; diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h index 72e245639eb8..58df106aaa5e 100644 --- a/drivers/thunderbolt/tb.h +++ b/drivers/thunderbolt/tb.h @@ -1002,6 +1002,7 @@ static inline bool tb_switch_tmu_is_enabled(const struct tb_switch *sw) bool tb_port_clx_is_enabled(struct tb_port *port, unsigned int clx); +int tb_switch_clx_init(struct tb_switch *sw); bool tb_switch_clx_is_supported(const struct tb_switch *sw); int tb_switch_clx_enable(struct tb_switch *sw, unsigned int clx); int tb_switch_clx_disable(struct tb_switch *sw);
From patchwork Mon May 29 10:04:24 2023
X-Patchwork-Submitter: Mika Westerberg
X-Patchwork-Id: 686837
From: Mika Westerberg
To: linux-usb@vger.kernel.org
Cc: Yehezkel Bernat, Michael Jamet, Lukas Wunner, Andreas Noever, Gil Fine, Mika Westerberg
Subject: [PATCH 19/20] thunderbolt: Make tb_switch_clx_disable() return CL states that were enabled
Date: Mon, 29 May 2023 13:04:24 +0300
Message-Id: <20230529100425.6125-20-mika.westerberg@linux.intel.com>
In-Reply-To: <20230529100425.6125-1-mika.westerberg@linux.intel.com>

This allows us to disable all CL states temporarily when running lane margining and then restore the previously enabled states afterwards.

Signed-off-by: Mika Westerberg --- drivers/thunderbolt/clx.c | 8 ++++++-- drivers/thunderbolt/debugfs.c | 35 ++++++++++++++++++++++++----------- 2 files changed, 30 insertions(+), 13 deletions(-) diff --git a/drivers/thunderbolt/clx.c b/drivers/thunderbolt/clx.c index 960409df4405..4f0cfbb24dd9 100644 --- a/drivers/thunderbolt/clx.c +++ b/drivers/thunderbolt/clx.c @@ -317,6 +317,9 @@ int tb_switch_clx_enable(struct tb_switch *sw, unsigned int clx) struct tb_port *up, *down; int ret; + if (!clx) + return 0; + if (!validate_mask(clx)) return -EINVAL; @@ -380,7 +383,8 @@ int tb_switch_clx_enable(struct tb_switch *sw, unsigned int clx) * Disables all CL states of the given router. Can be called on any * router and if the states were not enabled already does nothing. * - * Returns %0 on success or an error code on failure. + * Returns the CL states that were disabled or negative errno in case of + * failure. */ int tb_switch_clx_disable(struct tb_switch *sw) { @@ -408,5 +412,5 @@ int tb_switch_clx_disable(struct tb_switch *sw) sw->clx = 0; tb_sw_dbg(sw, "CLx: %s disabled\n", clx_name(clx)); - return 0; + return clx; } diff --git a/drivers/thunderbolt/debugfs.c b/drivers/thunderbolt/debugfs.c index e376ad25bf60..40b59e662ee3 100644 --- a/drivers/thunderbolt/debugfs.c +++ b/drivers/thunderbolt/debugfs.c @@ -553,8 +553,9 @@ static int margining_run_write(void *data, u64 val) struct usb4_port *usb4 = port->usb4; struct tb_switch *sw = port->sw; struct tb_margining *margining; + struct tb_switch *down_sw; struct tb *tb = sw->tb; - int ret; + int ret, clx; if (val != 1) return -EINVAL; @@ -566,15 +567,24 @@ static int margining_run_write(void *data, u64 val) goto out_rpm_put; } - /* - * CL states may interfere with lane margining so inform the user know - * and bail out.
- */ - if (tb_port_clx_is_enabled(port, TB_CL1 | TB_CL2)) { - tb_port_warn(port, - "CL states are enabled, Disable them with clx=0 and re-connect\n"); - ret = -EINVAL; - goto out_unlock; + if (tb_is_upstream_port(port)) + down_sw = sw; + else if (port->remote) + down_sw = port->remote->sw; + else + down_sw = NULL; + + if (down_sw) { + /* + * CL states may interfere with lane margining so + * disable them temporarily now. + */ + ret = tb_switch_clx_disable(down_sw); + if (ret < 0) { + tb_sw_warn(down_sw, "failed to disable CL states\n"); + goto out_unlock; + } + clx = ret; } margining = usb4->margining; @@ -586,7 +596,7 @@ static int margining_run_write(void *data, u64 val) margining->right_high, USB4_MARGIN_SW_COUNTER_CLEAR); if (ret) - goto out_unlock; + goto out_clx; ret = usb4_port_sw_margin_errors(port, &margining->results[0]); } else { @@ -600,6 +610,9 @@ static int margining_run_write(void *data, u64 val) margining->right_high, margining->results); } +out_clx: + if (down_sw) + tb_switch_clx_enable(down_sw, clx); out_unlock: mutex_unlock(&tb->lock); out_rpm_put:
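Taken together with the tb_switch_clx_enable() change above (enabling an empty mask is now a no-op), the disable/restore idiom the debugfs code follows can be sketched as below (error handling trimmed; run_lane_margining() is a hypothetical stand-in for the usb4 margining calls):

	int clx, ret;

	/* Returns the mask of CL states that were enabled, or negative errno. */
	clx = tb_switch_clx_disable(down_sw);
	if (clx < 0)
		return clx;

	ret = run_lane_margining(port);

	/*
	 * Hand the saved mask straight back. Since enabling an empty
	 * mask is a no-op, nothing special is needed when no CL states
	 * were enabled to begin with.
	 */
	tb_switch_clx_enable(down_sw, clx);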