From patchwork Mon May 9 20:16:51 2022
X-Patchwork-Submitter: Gil Fine
X-Patchwork-Id: 571771
From: Gil Fine
To: andreas.noever@gmail.com, michael.jamet@intel.com,
    mika.westerberg@linux.intel.com, YehezkelShB@gmail.com
Cc: gil.fine@intel.com, linux-usb@vger.kernel.org, lukas@wunner.de
Subject: [PATCH 1/6] thunderbolt: Silently ignore CLx enabling in case CLx
 is not supported
Date: Mon, 9 May 2022 23:16:51 +0300
Message-Id: <20220509201656.502-2-gil.fine@intel.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20220509201656.502-1-gil.fine@intel.com>
References: <20220509201656.502-1-gil.fine@intel.com>
X-Mailing-List: linux-usb@vger.kernel.org

We can't enable CLx if it is not supported either by the host or the
device, or by the USB4/TBT link (e.g. when an optical cable is used).
Silently ignore CLx enabling in this case.
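
The change follows a common error-handling pattern: treat "not supported"
as a benign outcome and warn only on real failures. A minimal standalone
sketch of that pattern (the names are illustrative, not the driver's code):

#include <errno.h>
#include <stdio.h>

/*
 * Illustrative stand-in for tb_switch_enable_clx(): returns 0 on
 * success, -EOPNOTSUPP when the link cannot do CLx, -EIO on failure.
 */
static int enable_clx(int supported, int io_ok)
{
	if (!supported)
		return -EOPNOTSUPP;
	return io_ok ? 0 : -EIO;
}

int main(void)
{
	int ret = enable_clx(0, 1);	/* unsupported link, e.g. optical */

	/* Warn only on real errors; "not supported" is silently ignored. */
	if (ret && ret != -EOPNOTSUPP)
		fprintf(stderr, "failed to enable CLx: %d\n", ret);
	return 0;
}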
Signed-off-by: Gil Fine
---
 drivers/thunderbolt/tb.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
index 44d04b651a8b..7419cd1aefba 100644
--- a/drivers/thunderbolt/tb.c
+++ b/drivers/thunderbolt/tb.c
@@ -581,6 +581,7 @@ static void tb_scan_port(struct tb_port *port)
 	struct tb_cm *tcm = tb_priv(port->sw->tb);
 	struct tb_port *upstream_port;
 	struct tb_switch *sw;
+	int ret;
 
 	if (tb_is_upstream_port(port))
 		return;
@@ -669,7 +670,9 @@ static void tb_scan_port(struct tb_port *port)
 		tb_switch_lane_bonding_enable(sw);
 	/* Set the link configured */
 	tb_switch_configure_link(sw);
-	if (tb_switch_enable_clx(sw, TB_CL0S))
+	/* Silently ignore CLx enabling in case CLx is not supported */
+	ret = tb_switch_enable_clx(sw, TB_CL0S);
+	if (ret && ret != -EOPNOTSUPP)
 		tb_sw_warn(sw, "failed to enable CLx on upstream port\n");
 
 	tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_HIFI,
@@ -1452,12 +1455,15 @@ static int tb_suspend_noirq(struct tb *tb)
 static void tb_restore_children(struct tb_switch *sw)
 {
 	struct tb_port *port;
+	int ret;
 
 	/* No need to restore if the router is already unplugged */
 	if (sw->is_unplugged)
 		return;
 
-	if (tb_switch_enable_clx(sw, TB_CL0S))
+	/* Silently ignore CLx re-enabling in case CLx is not supported */
+	ret = tb_switch_enable_clx(sw, TB_CL0S);
+	if (ret && ret != -EOPNOTSUPP)
 		tb_sw_warn(sw, "failed to re-enable CLx on upstream port\n");
 
 	/*

From patchwork Mon May 9 20:16:52 2022
X-Patchwork-Submitter: Gil Fine
X-Patchwork-Id: 571769
From: Gil Fine
To: andreas.noever@gmail.com, michael.jamet@intel.com,
    mika.westerberg@linux.intel.com, YehezkelShB@gmail.com
Cc: gil.fine@intel.com, linux-usb@vger.kernel.org, lukas@wunner.de
Subject: [PATCH 2/6] thunderbolt: CLx disable before system suspend only if
 previously enabled
Date: Mon, 9 May 2022 23:16:52 +0300
Message-Id: <20220509201656.502-3-gil.fine@intel.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20220509201656.502-1-gil.fine@intel.com>
References: <20220509201656.502-1-gil.fine@intel.com>
X-Mailing-List: linux-usb@vger.kernel.org

Disable CLx before system suspend only if it was previously enabled.
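
The fix is a state guard around teardown: on suspend, touch the hardware
only if the corresponding enable actually succeeded earlier. The shape of
it, as a standalone sketch (illustrative names only):

#include <stdbool.h>
#include <stdio.h>

/* Illustrative model: clx_enabled is set when an earlier enable succeeded. */
struct link_state {
	bool clx_enabled;
};

static int disable_clx(struct link_state *l)
{
	l->clx_enabled = false;	/* the real driver pokes hardware here */
	return 0;
}

int main(void)
{
	struct link_state l = { .clx_enabled = false };

	/* Tear down only what was actually brought up. */
	if (l.clx_enabled) {
		if (disable_clx(&l))
			fprintf(stderr, "failed to disable CLx\n");
	}
	return 0;
}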
Signed-off-by: Gil Fine
---
 drivers/thunderbolt/switch.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index ac87e8b50e52..549523c9a543 100644
--- a/drivers/thunderbolt/switch.c
+++ b/drivers/thunderbolt/switch.c
@@ -3063,8 +3063,10 @@ void tb_switch_suspend(struct tb_switch *sw, bool runtime)
 	 * Actually only needed for Titan Ridge but for simplicity can be
 	 * done for USB4 device too as CLx is re-enabled at resume.
 	 */
-	if (tb_switch_disable_clx(sw, TB_CL0S))
-		tb_sw_warn(sw, "failed to disable CLx on upstream port\n");
+	if (tb_switch_is_clx_enabled(sw)) {
+		if (tb_switch_disable_clx(sw, TB_CL0S))
+			tb_sw_warn(sw, "failed to disable CLx on upstream port\n");
+	}
 
 	err = tb_plug_events_active(sw, false);
 	if (err)

From patchwork Mon May 9 20:16:53 2022
X-Patchwork-Submitter: Gil Fine
X-Patchwork-Id: 571190
From: Gil Fine
To: andreas.noever@gmail.com, michael.jamet@intel.com,
    mika.westerberg@linux.intel.com, YehezkelShB@gmail.com
Cc: gil.fine@intel.com, linux-usb@vger.kernel.org, lukas@wunner.de
Subject: [PATCH 3/6] thunderbolt: Fix typos in CLx enabling
Date: Mon, 9 May 2022 23:16:53 +0300
Message-Id: <20220509201656.502-4-gil.fine@intel.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20220509201656.502-1-gil.fine@intel.com>
References: <20220509201656.502-1-gil.fine@intel.com>
X-Mailing-List: linux-usb@vger.kernel.org

Fix a few typos in CLx enabling.
Signed-off-by: Gil Fine
---
 drivers/thunderbolt/switch.c | 2 +-
 drivers/thunderbolt/tmu.c    | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index 549523c9a543..42b7daaf9c4d 100644
--- a/drivers/thunderbolt/switch.c
+++ b/drivers/thunderbolt/switch.c
@@ -3485,7 +3485,7 @@ static int tb_switch_enable_cl0s(struct tb_switch *sw)
  * to improve performance. CLx is enabled only if both sides of the link
  * support CLx, and if both sides of the link are not configured as two
  * single lane links and only if the link is not inter-domain link. The
- * complete set of conditions is descibed in CM Guide 1.0 section 8.1.
+ * complete set of conditions is described in CM Guide 1.0 section 8.1.
  *
  * Return: Returns 0 on success or an error code on failure.
  */
diff --git a/drivers/thunderbolt/tmu.c b/drivers/thunderbolt/tmu.c
index e4a07a26f693..b656659d02fb 100644
--- a/drivers/thunderbolt/tmu.c
+++ b/drivers/thunderbolt/tmu.c
@@ -606,7 +606,7 @@ int tb_switch_tmu_enable(struct tb_switch *sw)
 /**
  * tb_switch_tmu_configure() - Configure the TMU rate and directionality
  * @sw: Router whose mode to change
- * @rate: Rate to configure Off/LowRes/HiFi
+ * @rate: Rate to configure Off/Normal/HiFi
  * @unidirectional: If uni-directional (bi-directional otherwise)
  *
  * Selects the rate of the TMU and directionality (uni-directional or

From patchwork Mon May 9 20:16:54 2022
X-Patchwork-Submitter: Gil Fine
X-Patchwork-Id: 571189
From: Gil Fine
To: andreas.noever@gmail.com, michael.jamet@intel.com,
    mika.westerberg@linux.intel.com, YehezkelShB@gmail.com
Cc: gil.fine@intel.com, linux-usb@vger.kernel.org, lukas@wunner.de
Subject: [PATCH 4/6] thunderbolt: Change downstream router's TMU rate in
 both TMU uni/bidir mode
Date: Mon, 9 May 2022 23:16:54 +0300
Message-Id: <20220509201656.502-5-gil.fine@intel.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20220509201656.502-1-gil.fine@intel.com>
References: <20220509201656.502-1-gil.fine@intel.com>
X-Mailing-List: linux-usb@vger.kernel.org

In case of uni-directional time sync, the TMU handshake is initiated by
the upstream router; in case of bi-directional time sync, it is
initiated by the downstream router. To handle the uni-directional case
correctly, avoid changing the upstream router's rate to off: it may
have another downstream router plugged in that is set to
uni-directional mode, and we don't want to change that router's mode.
Instead, always change the downstream router's rate.
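
The reasoning can be summarized as: the rate register that is safe to
clear is always the downstream router's own, because the parent's
register is shared with siblings. A toy model of the hazard
(illustrative code, not the driver's):

#include <stdio.h>

enum rate { RATE_OFF, RATE_HIFI };

struct router {
	enum rate rate;		/* TMU rate register */
	struct router *parent;	/* upstream router, NULL for host */
};

/*
 * Old behaviour: clearing the parent's rate for a uni-directional child
 * also breaks any sibling still doing uni-directional time sync.
 */
static void tmu_disable(struct router *rt, int unidirectional, int fixed)
{
	if (!fixed && unidirectional)
		rt->parent->rate = RATE_OFF;	/* affects siblings too */
	else
		rt->rate = RATE_OFF;		/* affects only this link */
}

int main(void)
{
	struct router host = { RATE_HIFI, NULL };
	struct router a = { RATE_HIFI, &host }, b = { RATE_HIFI, &host };

	tmu_disable(&a, 1, 0);	/* old: host rate now off, sibling b broken */
	printf("host rate after old disable: %d\n", host.rate);

	host.rate = RATE_HIFI;
	tmu_disable(&a, 1, 1);	/* fixed: only a's own rate cleared */
	printf("host rate after fixed disable: %d, b still %d\n",
	       host.rate, b.rate);
	return 0;
}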
Signed-off-by: Gil Fine
---
 drivers/thunderbolt/tmu.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/drivers/thunderbolt/tmu.c b/drivers/thunderbolt/tmu.c
index b656659d02fb..985ca43b8f39 100644
--- a/drivers/thunderbolt/tmu.c
+++ b/drivers/thunderbolt/tmu.c
@@ -359,13 +359,14 @@ int tb_switch_tmu_disable(struct tb_switch *sw)
 		 * In case of uni-directional time sync, TMU handshake is
 		 * initiated by upstream router. In case of bi-directional
 		 * time sync, TMU handshake is initiated by downstream router.
-		 * Therefore, we change the rate to off in the respective
-		 * router.
+		 * We change downstream router's rate to off for both uni/bidir
+		 * cases although it is needed only for the bi-directional mode.
+		 * We avoid changing upstream router's mode since it might
+		 * have another downstream router plugged, that is set to
+		 * uni-directional mode and we don't want to change its TMU
+		 * mode.
 		 */
-		if (unidirectional)
-			tb_switch_tmu_rate_write(parent, TB_SWITCH_TMU_RATE_OFF);
-		else
-			tb_switch_tmu_rate_write(sw, TB_SWITCH_TMU_RATE_OFF);
+		tb_switch_tmu_rate_write(sw, TB_SWITCH_TMU_RATE_OFF);
 
 		tb_port_tmu_time_sync_disable(up);
 		ret = tb_port_tmu_time_sync_disable(down);

From patchwork Mon May 9 20:16:55 2022
X-Patchwork-Submitter: Gil Fine
X-Patchwork-Id: 571191
From: Gil Fine
To: andreas.noever@gmail.com, michael.jamet@intel.com,
    mika.westerberg@linux.intel.com, YehezkelShB@gmail.com
Cc: gil.fine@intel.com, linux-usb@vger.kernel.org, lukas@wunner.de
Subject: [PATCH 5/6] thunderbolt: Add CL1 support for USB4 and Titan Ridge
 routers
Date: Mon, 9 May 2022 23:16:55 +0300
Message-Id: <20220509201656.502-6-gil.fine@intel.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20220509201656.502-1-gil.fine@intel.com>
References: <20220509201656.502-1-gil.fine@intel.com>
X-Mailing-List: linux-usb@vger.kernel.org

Add support for a second low-power state of the link: CL1. Low-power
states (collectively called CLx) are used to reduce transmitter and
receiver power when a high-speed lane is idle. CL1 is enabled only if
both sides of the link support it, and only for the first-hop router
(i.e. the device directly connected to the host router). This is
needed for better thermal management.
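
At the register level, enabling CL0s/CL1 is a read-modify-write of two
enable bits in the lane adapter config space, as __tb_port_clx_set() in
the diff below does. A freestanding sketch of that masking (the bit
positions here are illustrative, not the real LANE_ADP_CS_1 layout):

#include <stdint.h>
#include <stdio.h>

#define CL0S_ENABLE	(1u << 10)	/* illustrative bit positions */
#define CL1_ENABLE	(1u << 11)

/* CL0s and CL1 are enabled together, mirroring __tb_port_clx_set(). */
static uint32_t clx_set(uint32_t lane_adp_cs_1, int enable)
{
	const uint32_t mask = CL0S_ENABLE | CL1_ENABLE;

	if (enable)
		lane_adp_cs_1 |= mask;
	else
		lane_adp_cs_1 &= ~mask;
	return lane_adp_cs_1;
}

int main(void)
{
	uint32_t reg = 0x3;	/* pretend initial register value */

	reg = clx_set(reg, 1);
	printf("enabled:  0x%08x\n", reg);
	reg = clx_set(reg, 0);
	printf("disabled: 0x%08x\n", reg);
	return 0;
}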
Signed-off-by: Gil Fine
---
 drivers/thunderbolt/switch.c  |  89 ++++++++---------
 drivers/thunderbolt/tb.c      |  40 ++++++--
 drivers/thunderbolt/tb.h      |  46 ++++-----
 drivers/thunderbolt/tb_regs.h |   6 ++
 drivers/thunderbolt/tmu.c     | 177 ++++++++++++++++++++++++++++------
 5 files changed, 256 insertions(+), 102 deletions(-)

diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index 42b7daaf9c4d..7ce2ac96da23 100644
--- a/drivers/thunderbolt/switch.c
+++ b/drivers/thunderbolt/switch.c
@@ -3062,10 +3062,12 @@ void tb_switch_suspend(struct tb_switch *sw, bool runtime)
 	/*
 	 * Actually only needed for Titan Ridge but for simplicity can be
 	 * done for USB4 device too as CLx is re-enabled at resume.
+	 * CL0s and CL1 are enabled and supported together.
 	 */
-	if (tb_switch_is_clx_enabled(sw)) {
-		if (tb_switch_disable_clx(sw, TB_CL0S))
-			tb_sw_warn(sw, "failed to disable CLx on upstream port\n");
+	if (tb_switch_is_clx_enabled(sw, TB_CL1)) {
+		if (tb_switch_disable_clx(sw, TB_CL1))
+			tb_sw_warn(sw, "failed to disable %s on upstream port\n",
+				   tb_switch_clx_name(TB_CL1));
 	}
 
 	err = tb_plug_events_active(sw, false);
@@ -3357,13 +3359,12 @@ static bool tb_port_clx_supported(struct tb_port *port, enum tb_clx clx)
 	}
 
 	switch (clx) {
-	case TB_CL0S:
-		/* CL0s support requires also CL1 support */
+	case TB_CL1:
+		/* CL0s and CL1 are enabled and supported together */
 		mask = LANE_ADP_CS_0_CL0S_SUPPORT | LANE_ADP_CS_0_CL1_SUPPORT;
 		break;
 
-	/* For now we support only CL0s. Not CL1, CL2 */
-	case TB_CL1:
+	/* For now we support only CL0s and CL1. Not CL2 */
 	case TB_CL2:
 	default:
 		return false;
@@ -3377,18 +3378,18 @@ static bool tb_port_clx_supported(struct tb_port *port, enum tb_clx clx)
 	return !!(val & mask);
 }
 
-static inline bool tb_port_cl0s_supported(struct tb_port *port)
-{
-	return tb_port_clx_supported(port, TB_CL0S);
-}
-
-static int __tb_port_cl0s_set(struct tb_port *port, bool enable)
+static int __tb_port_clx_set(struct tb_port *port, enum tb_clx clx, bool enable)
 {
 	u32 phy, mask;
 	int ret;
 
-	/* To enable CL0s also required to enable CL1 */
-	mask = LANE_ADP_CS_1_CL0S_ENABLE | LANE_ADP_CS_1_CL1_ENABLE;
+	/* CL0s and CL1 are enabled and supported together */
+	if (clx == TB_CL1)
+		mask = LANE_ADP_CS_1_CL0S_ENABLE | LANE_ADP_CS_1_CL1_ENABLE;
+	else
+		/* For now we support only CL0s and CL1. Not CL2 */
+		return -EOPNOTSUPP;
 
 	ret = tb_port_read(port, &phy, TB_CFG_PORT,
 			   port->cap_phy + LANE_ADP_CS_1, 1);
 	if (ret)
@@ -3403,20 +3404,20 @@ static int __tb_port_cl0s_set(struct tb_port *port, bool enable)
 			     port->cap_phy + LANE_ADP_CS_1, 1);
 }
 
-static int tb_port_cl0s_disable(struct tb_port *port)
+static int tb_port_clx_disable(struct tb_port *port, enum tb_clx clx)
 {
-	return __tb_port_cl0s_set(port, false);
+	return __tb_port_clx_set(port, clx, false);
 }
 
-static int tb_port_cl0s_enable(struct tb_port *port)
+static int tb_port_clx_enable(struct tb_port *port, enum tb_clx clx)
 {
-	return __tb_port_cl0s_set(port, true);
+	return __tb_port_clx_set(port, clx, true);
 }
 
-static int tb_switch_enable_cl0s(struct tb_switch *sw)
+static int __tb_switch_enable_clx(struct tb_switch *sw, enum tb_clx clx)
 {
 	struct tb_switch *parent = tb_switch_parent(sw);
-	bool up_cl0s_support, down_cl0s_support;
+	bool up_clx_support, down_clx_support;
 	struct tb_port *up, *down;
 	int ret;
 
@@ -3441,37 +3442,37 @@ static int tb_switch_enable_cl0s(struct tb_switch *sw)
 	up = tb_upstream_port(sw);
 	down = tb_port_at(tb_route(sw), parent);
 
-	up_cl0s_support = tb_port_cl0s_supported(up);
-	down_cl0s_support = tb_port_cl0s_supported(down);
+	up_clx_support = tb_port_clx_supported(up, clx);
+	down_clx_support = tb_port_clx_supported(down, clx);
 
-	tb_port_dbg(up, "CL0s %ssupported\n",
-		    up_cl0s_support ? "" : "not ");
-	tb_port_dbg(down, "CL0s %ssupported\n",
-		    down_cl0s_support ? "" : "not ");
+	tb_port_dbg(up, "%s %ssupported\n", tb_switch_clx_name(clx),
+		    up_clx_support ? "" : "not ");
+	tb_port_dbg(down, "%s %ssupported\n", tb_switch_clx_name(clx),
+		    down_clx_support ? "" : "not ");
 
-	if (!up_cl0s_support || !down_cl0s_support)
+	if (!up_clx_support || !down_clx_support)
 		return -EOPNOTSUPP;
 
-	ret = tb_port_cl0s_enable(up);
+	ret = tb_port_clx_enable(up, clx);
 	if (ret)
 		return ret;
 
-	ret = tb_port_cl0s_enable(down);
+	ret = tb_port_clx_enable(down, clx);
 	if (ret) {
-		tb_port_cl0s_disable(up);
+		tb_port_clx_disable(up, clx);
 		return ret;
 	}
 
 	ret = tb_switch_mask_clx_objections(sw);
 	if (ret) {
-		tb_port_cl0s_disable(up);
-		tb_port_cl0s_disable(down);
+		tb_port_clx_disable(up, clx);
+		tb_port_clx_disable(down, clx);
 		return ret;
 	}
 
-	sw->clx = TB_CL0S;
+	sw->clx = clx;
 
-	tb_port_dbg(up, "CL0s enabled\n");
+	tb_port_dbg(up, "%s enabled\n", tb_switch_clx_name(clx));
 	return 0;
 }
@@ -3504,15 +3505,16 @@ int tb_switch_enable_clx(struct tb_switch *sw, enum tb_clx clx)
 		return 0;
 
 	switch (clx) {
-	case TB_CL0S:
-		return tb_switch_enable_cl0s(sw);
+	case TB_CL1:
+		/* CL0s and CL1 are enabled and supported together */
+		return __tb_switch_enable_clx(sw, clx);
 
 	default:
 		return -EOPNOTSUPP;
 	}
 }
 
-static int tb_switch_disable_cl0s(struct tb_switch *sw)
+static int __tb_switch_disable_clx(struct tb_switch *sw, enum tb_clx clx)
 {
 	struct tb_switch *parent = tb_switch_parent(sw);
 	struct tb_port *up, *down;
@@ -3534,17 +3536,17 @@ static int tb_switch_disable_cl0s(struct tb_switch *sw)
 	up = tb_upstream_port(sw);
 	down = tb_port_at(tb_route(sw), parent);
 
-	ret = tb_port_cl0s_disable(up);
+	ret = tb_port_clx_disable(up, clx);
 	if (ret)
 		return ret;
 
-	ret = tb_port_cl0s_disable(down);
+	ret = tb_port_clx_disable(down, clx);
 	if (ret)
 		return ret;
 
 	sw->clx = TB_CLX_DISABLE;
 
-	tb_port_dbg(up, "CL0s disabled\n");
+	tb_port_dbg(up, "%s disabled\n", tb_switch_clx_name(clx));
 	return 0;
 }
@@ -3561,8 +3563,9 @@ int tb_switch_disable_clx(struct tb_switch *sw, enum tb_clx clx)
 		return 0;
 
 	switch (clx) {
-	case TB_CL0S:
-		return tb_switch_disable_cl0s(sw);
+	case TB_CL1:
+		/* CL0s and CL1 are enabled and supported together */
+		return __tb_switch_disable_clx(sw, clx);
 
 	default:
 		return -EOPNOTSUPP;
diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
index 7419cd1aefba..4f74789dd1be 100644
--- a/drivers/thunderbolt/tb.c
+++ b/drivers/thunderbolt/tb.c
@@ -221,7 +221,7 @@ static int tb_enable_tmu(struct tb_switch *sw)
 	int ret;
 
 	/* If it is already enabled in correct mode, don't touch it */
-	if (tb_switch_tmu_hifi_is_enabled(sw, sw->tmu.unidirectional_request))
+	if (tb_switch_tmu_is_enabled(sw, sw->tmu.unidirectional_request))
 		return 0;
 
 	ret = tb_switch_tmu_disable(sw);
@@ -670,13 +670,24 @@ static void tb_scan_port(struct tb_port *port)
 		tb_switch_lane_bonding_enable(sw);
 	/* Set the link configured */
 	tb_switch_configure_link(sw);
-	/* Silently ignore CLx enabling in case CLx is not supported */
-	ret = tb_switch_enable_clx(sw, TB_CL0S);
+	/*
+	 * CL0s and CL1 are enabled and supported together.
+	 * Silently ignore CLx enabling in case CLx is not supported.
+	 */
+	ret = tb_switch_enable_clx(sw, TB_CL1);
 	if (ret && ret != -EOPNOTSUPP)
-		tb_sw_warn(sw, "failed to enable CLx on upstream port\n");
+		tb_sw_warn(sw, "failed to enable %s on upstream port\n",
+			   tb_switch_clx_name(TB_CL1));
 
-	tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_HIFI,
-				tb_switch_is_clx_enabled(sw));
+	if (tb_switch_is_clx_enabled(sw, TB_CL1))
+		/*
+		 * To support highest CLx state, we set host router's TMU to
+		 * Normal-Uni mode.
+		 */
+		tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_NORMAL, true);
+	else
+		/* If CLx disabled, configure router's TMU to HiFi-Bidir mode*/
+		tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_HIFI, false);
 
 	if (tb_enable_tmu(sw))
 		tb_sw_warn(sw, "failed to enable TMU\n");
@@ -1416,7 +1427,12 @@ static int tb_start(struct tb *tb)
 		return ret;
 	}
 
-	tb_switch_tmu_configure(tb->root_switch, TB_SWITCH_TMU_RATE_HIFI, false);
+	/*
+	 * To support highest CLx state, we set host router's TMU to
+	 * Normal mode.
+	 */
+	tb_switch_tmu_configure(tb->root_switch, TB_SWITCH_TMU_RATE_NORMAL,
+				false);
 	/* Enable TMU if it is off */
 	tb_switch_tmu_enable(tb->root_switch);
 	/* Full scan to discover devices added before the driver was loaded. */
@@ -1461,10 +1477,14 @@ static void tb_restore_children(struct tb_switch *sw)
 	if (sw->is_unplugged)
 		return;
 
-	/* Silently ignore CLx re-enabling in case CLx is not supported */
-	ret = tb_switch_enable_clx(sw, TB_CL0S);
+	/*
+	 * CL0s and CL1 are enabled and supported together.
+	 * Silently ignore CLx re-enabling in case CLx is not supported.
+	 */
+	ret = tb_switch_enable_clx(sw, TB_CL1);
 	if (ret && ret != -EOPNOTSUPP)
-		tb_sw_warn(sw, "failed to re-enable CLx on upstream port\n");
+		tb_sw_warn(sw, "failed to re-enable %s on upstream port\n",
+			   tb_switch_clx_name(TB_CL1));
 
 	/*
 	 * tb_switch_tmu_configure() was already called when the switch was
diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index ad025ff142ba..a16fffba9dd2 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -13,6 +13,7 @@
 #include
 #include
 #include
+#include
 
 #include "tb_regs.h"
 #include "ctl.h"
@@ -111,7 +112,7 @@ struct tb_switch_tmu {
 
 enum tb_clx {
 	TB_CLX_DISABLE,
-	TB_CL0S,
+	/* CL0s and CL1 are enabled and supported together */
 	TB_CL1,
 	TB_CL2,
 };
@@ -934,45 +935,46 @@ void tb_switch_tmu_configure(struct tb_switch *sw,
 			     enum tb_switch_tmu_rate rate,
 			     bool unidirectional);
 /**
- * tb_switch_tmu_hifi_is_enabled() - Checks if the specified TMU mode is enabled
+ * tb_switch_tmu_is_enabled() - Checks if the specified TMU mode is enabled
  * @sw: Router whose TMU mode to check
  * @unidirectional: If uni-directional (bi-directional otherwise)
  *
  * Return true if hardware TMU configuration matches the one passed in
- * as parameter. That is HiFi and either uni-directional or bi-directional.
+ * as parameter. That is HiFi/Normal and either uni-directional or bi-directional.
  */
-static inline bool tb_switch_tmu_hifi_is_enabled(const struct tb_switch *sw,
-						 bool unidirectional)
+static inline bool tb_switch_tmu_is_enabled(const struct tb_switch *sw,
+					    bool unidirectional)
 {
-	return sw->tmu.rate == TB_SWITCH_TMU_RATE_HIFI &&
+	return sw->tmu.rate == sw->tmu.rate_request &&
 	       sw->tmu.unidirectional == unidirectional;
 }
 
+static inline const char *tb_switch_clx_name(enum tb_clx clx)
+{
+	switch (clx) {
+	/* CL0s and CL1 are enabled and supported together */
+	case TB_CL1:
+		return "CL0s/CL1";
+	default:
+		return "Unknown";
+	}
+}
+
 int tb_switch_enable_clx(struct tb_switch *sw, enum tb_clx clx);
 int tb_switch_disable_clx(struct tb_switch *sw, enum tb_clx clx);
 
 /**
  * tb_switch_is_clx_enabled() - Checks if the CLx is enabled
- * @sw: Router to check the CLx state for
- *
- * Checks if the CLx is enabled on the router upstream link.
- * Not applicable for a host router.
- */
-static inline bool tb_switch_is_clx_enabled(const struct tb_switch *sw)
-{
-	return sw->clx != TB_CLX_DISABLE;
-}
-
-/**
- * tb_switch_is_cl0s_enabled() - Checks if the CL0s is enabled
- * @sw: Router to check for the CL0s
+ * @sw: Router to check for the CLx
+ * @clx: The CLx state to check for
  *
- * Checks if the CL0s is enabled on the router upstream link.
+ * Checks if the specified CLx is enabled on the router upstream link.
  * Not applicable for a host router.
  */
-static inline bool tb_switch_is_cl0s_enabled(const struct tb_switch *sw)
+static inline bool tb_switch_is_clx_enabled(const struct tb_switch *sw,
+					    enum tb_clx clx)
 {
-	return sw->clx == TB_CL0S;
+	return sw->clx == clx;
 }
 
 /**
diff --git a/drivers/thunderbolt/tb_regs.h b/drivers/thunderbolt/tb_regs.h
index b301eeb0c89b..dd85aac5ced3 100644
--- a/drivers/thunderbolt/tb_regs.h
+++ b/drivers/thunderbolt/tb_regs.h
@@ -234,6 +234,7 @@ enum usb4_switch_op {
 /* Router TMU configuration */
 #define TMU_RTR_CS_0			0x00
+#define TMU_RTR_CS_0_FREQ_WIND_MASK	GENMASK(26, 16)
 #define TMU_RTR_CS_0_TD			BIT(27)
 #define TMU_RTR_CS_0_UCAP		BIT(30)
 #define TMU_RTR_CS_1			0x01
@@ -244,6 +245,11 @@ enum usb4_switch_op {
 #define TMU_RTR_CS_3_LOCAL_TIME_NS_MASK		GENMASK(15, 0)
 #define TMU_RTR_CS_3_TS_PACKET_INTERVAL_MASK	GENMASK(31, 16)
 #define TMU_RTR_CS_3_TS_PACKET_INTERVAL_SHIFT	16
+#define TMU_RTR_CS_15			0xf
+#define TMU_RTR_CS_15_FREQ_AVG_MASK	GENMASK(5, 0)
+#define TMU_RTR_CS_15_DELAY_AVG_MASK	GENMASK(11, 6)
+#define TMU_RTR_CS_15_OFFSET_AVG_MASK	GENMASK(17, 12)
+#define TMU_RTR_CS_15_ERROR_AVG_MASK	GENMASK(23, 18)
 #define TMU_RTR_CS_22			0x16
 #define TMU_RTR_CS_24			0x18
 #define TMU_RTR_CS_25			0x19
diff --git a/drivers/thunderbolt/tmu.c b/drivers/thunderbolt/tmu.c
index 985ca43b8f39..e822ab90338b 100644
--- a/drivers/thunderbolt/tmu.c
+++ b/drivers/thunderbolt/tmu.c
@@ -11,6 +11,55 @@
 
 #include "tb.h"
 
+static int tb_switch_set_tmu_mode_params(struct tb_switch *sw,
+					 enum tb_switch_tmu_rate rate)
+{
+	u32 freq_meas_wind[2] = { 30, 800 };
+	u32 avg_const[2] = { 4, 8 };
+	u32 freq, avg, val;
+	int ret;
+
+	if (rate == TB_SWITCH_TMU_RATE_NORMAL) {
+		freq = freq_meas_wind[0];
+		avg = avg_const[0];
+	} else if (rate == TB_SWITCH_TMU_RATE_HIFI) {
+		freq = freq_meas_wind[1];
+		avg = avg_const[1];
+	} else {
+		return 0;
+	}
+
+	ret = tb_sw_read(sw, &val, TB_CFG_SWITCH,
+			 sw->tmu.cap + TMU_RTR_CS_0, 1);
+	if (ret)
+		return ret;
+
+	val &= ~TMU_RTR_CS_0_FREQ_WIND_MASK;
+	val |= FIELD_PREP(TMU_RTR_CS_0_FREQ_WIND_MASK, freq);
+
+	ret = tb_sw_write(sw, &val, TB_CFG_SWITCH,
+			  sw->tmu.cap + TMU_RTR_CS_0, 1);
+	if (ret)
+		return ret;
+
+	ret = tb_sw_read(sw, &val, TB_CFG_SWITCH,
+			 sw->tmu.cap + TMU_RTR_CS_15, 1);
+	if (ret)
+		return ret;
+
+	val &= ~TMU_RTR_CS_15_FREQ_AVG_MASK &
+		~TMU_RTR_CS_15_DELAY_AVG_MASK &
+		~TMU_RTR_CS_15_OFFSET_AVG_MASK &
+		~TMU_RTR_CS_15_ERROR_AVG_MASK;
+	val |= FIELD_PREP(TMU_RTR_CS_15_FREQ_AVG_MASK, avg) |
+		FIELD_PREP(TMU_RTR_CS_15_DELAY_AVG_MASK, avg) |
+		FIELD_PREP(TMU_RTR_CS_15_OFFSET_AVG_MASK, avg) |
+		FIELD_PREP(TMU_RTR_CS_15_ERROR_AVG_MASK, avg);
+
+	return tb_sw_write(sw, &val, TB_CFG_SWITCH,
+			   sw->tmu.cap + TMU_RTR_CS_15, 1);
+}
+
 static const char *tb_switch_tmu_mode_name(const struct tb_switch *sw)
 {
 	bool root_switch = !tb_route(sw);
@@ -348,7 +397,7 @@ int tb_switch_tmu_disable(struct tb_switch *sw)
 
 	if (tb_route(sw)) {
-		bool unidirectional = tb_switch_tmu_hifi_is_enabled(sw, true);
+		bool unidirectional = sw->tmu.unidirectional;
 		struct tb_switch *parent = tb_switch_parent(sw);
 		struct tb_port *down, *up;
 		int ret;
@@ -412,6 +461,7 @@ static void __tb_switch_tmu_off(struct tb_switch *sw, bool unidirectional)
 	else
 		tb_switch_tmu_rate_write(sw, TB_SWITCH_TMU_RATE_OFF);
 
+	tb_switch_set_tmu_mode_params(sw, sw->tmu.rate);
 	tb_port_tmu_unidirectional_disable(down);
 	tb_port_tmu_unidirectional_disable(up);
 }
@@ -493,7 +543,11 @@ static int __tb_switch_tmu_enable_unidirectional(struct tb_switch *sw)
 	up = tb_upstream_port(sw);
 	down = tb_port_at(tb_route(sw), parent);
-	ret = tb_switch_tmu_rate_write(parent, TB_SWITCH_TMU_RATE_HIFI);
+	ret = tb_switch_tmu_rate_write(parent, sw->tmu.rate_request);
+	if (ret)
+		return ret;
+
+	ret = tb_switch_set_tmu_mode_params(sw, sw->tmu.rate_request);
 	if (ret)
 		return ret;
 
@@ -520,7 +574,83 @@ static int __tb_switch_tmu_enable_unidirectional(struct tb_switch *sw)
 	return ret;
 }
 
-static int tb_switch_tmu_hifi_enable(struct tb_switch *sw)
+static void __tb_switch_tmu_change_mode_prev(struct tb_switch *sw)
+{
+	struct tb_switch *parent = tb_switch_parent(sw);
+	struct tb_port *down, *up;
+
+	down = tb_port_at(tb_route(sw), parent);
+	up = tb_upstream_port(sw);
+	/*
+	 * In case of any failure in one of the steps when change mode,
+	 * get back to the TMU configurations in previous mode.
+	 * In case of additional failures in the functions below,
+	 * ignore them since the caller shall already report a failure.
+	 */
+	tb_port_tmu_set_unidirectional(down, sw->tmu.unidirectional);
+	if (sw->tmu.unidirectional_request)
+		tb_switch_tmu_rate_write(parent, sw->tmu.rate);
+	else
+		tb_switch_tmu_rate_write(sw, sw->tmu.rate);
+
+	tb_switch_set_tmu_mode_params(sw, sw->tmu.rate);
+	tb_port_tmu_set_unidirectional(up, sw->tmu.unidirectional);
+}
+
+static int __tb_switch_tmu_change_mode(struct tb_switch *sw)
+{
+	struct tb_switch *parent = tb_switch_parent(sw);
+	struct tb_port *up, *down;
+	int ret;
+
+	up = tb_upstream_port(sw);
+	down = tb_port_at(tb_route(sw), parent);
+	ret = tb_port_tmu_set_unidirectional(down, sw->tmu.unidirectional_request);
+	if (ret)
+		goto out;
+
+	if (sw->tmu.unidirectional_request)
+		ret = tb_switch_tmu_rate_write(parent, sw->tmu.rate_request);
+	else
+		ret = tb_switch_tmu_rate_write(sw, sw->tmu.rate_request);
+	if (ret)
+		return ret;
+
+	ret = tb_switch_set_tmu_mode_params(sw, sw->tmu.rate_request);
+	if (ret)
+		return ret;
+
+	ret = tb_port_tmu_set_unidirectional(up, sw->tmu.unidirectional_request);
+	if (ret)
+		goto out;
+
+	ret = tb_port_tmu_time_sync_enable(down);
+	if (ret)
+		goto out;
+
+	ret = tb_port_tmu_time_sync_enable(up);
+	if (ret)
+		goto out;
+
+	return 0;
+
+out:
+	__tb_switch_tmu_change_mode_prev(sw);
+	return ret;
+}
+
+/**
+ * tb_switch_tmu_enable() - Enable TMU on a router
+ * @sw: Router whose TMU to enable
+ *
+ * Enables TMU of a router to be in uni-directional Normal/HiFi
+ * or bi-directional HiFi mode. Calling tb_switch_tmu_configure() is required
+ * before calling this function, to select the mode Normal/HiFi and
+ * directionality (uni-directional/bi-directional).
+ * In HiFi mode all tunneling should work. In Normal mode, DP tunneling can't
+ * work. Uni-directional mode is required for CLx (Link Low-Power) to work.
+ */
+int tb_switch_tmu_enable(struct tb_switch *sw)
 {
 	bool unidirectional = sw->tmu.unidirectional_request;
 	int ret;
@@ -536,12 +666,15 @@ static int tb_switch_tmu_hifi_enable(struct tb_switch *sw)
 	if (!tb_switch_is_clx_supported(sw))
 		return 0;
 
-	if (tb_switch_tmu_hifi_is_enabled(sw, sw->tmu.unidirectional_request))
+	if (tb_switch_tmu_is_enabled(sw, sw->tmu.unidirectional_request))
 		return 0;
 
 	if (tb_switch_is_titan_ridge(sw) && unidirectional) {
-		/* Titan Ridge supports only CL0s */
-		if (!tb_switch_is_cl0s_enabled(sw))
+		/*
+		 * Titan Ridge supports CL0s and CL1 only. CL0s and CL1 are
+		 * enabled and supported together.
+		 */
+		if (!tb_switch_is_clx_enabled(sw, TB_CL1))
 			return -EOPNOTSUPP;
 
 		ret = tb_switch_tmu_objection_mask(sw);
@@ -558,7 +691,11 @@ static int tb_switch_tmu_hifi_enable(struct tb_switch *sw)
 		return ret;
 
 	if (tb_route(sw)) {
-		/* The used mode changes are from OFF to HiFi-Uni/HiFi-BiDir */
+		/*
+		 * The used mode changes are from OFF to
+		 * HiFi-Uni/HiFi-BiDir/Normal-Uni or from Normal-Uni to
+		 * HiFi-Uni.
+		 */
 		if (sw->tmu.rate == TB_SWITCH_TMU_RATE_OFF) {
 			if (unidirectional)
 				ret = __tb_switch_tmu_enable_unidirectional(sw);
@@ -566,6 +703,10 @@ static int tb_switch_tmu_hifi_enable(struct tb_switch *sw)
 				ret = __tb_switch_tmu_enable_bidirectional(sw);
 			if (ret)
 				return ret;
+		} else if (sw->tmu.rate == TB_SWITCH_TMU_RATE_NORMAL) {
+			ret = __tb_switch_tmu_change_mode(sw);
+			if (ret)
+				return ret;
 		}
 		sw->tmu.unidirectional = unidirectional;
 	} else {
@@ -575,35 +716,17 @@ static int tb_switch_tmu_hifi_enable(struct tb_switch *sw)
 		 * of the child node - see above.
 		 * Here only the host router' rate configuration is written.
 		 */
-		ret = tb_switch_tmu_rate_write(sw, TB_SWITCH_TMU_RATE_HIFI);
+		ret = tb_switch_tmu_rate_write(sw, sw->tmu.rate_request);
 		if (ret)
 			return ret;
 	}
 
-	sw->tmu.rate = TB_SWITCH_TMU_RATE_HIFI;
+	sw->tmu.rate = sw->tmu.rate_request;
 
 	tb_sw_dbg(sw, "TMU: mode set to: %s\n", tb_switch_tmu_mode_name(sw));
 
 	return tb_switch_tmu_set_time_disruption(sw, false);
 }
 
-/**
- * tb_switch_tmu_enable() - Enable TMU on a router
- * @sw: Router whose TMU to enable
- *
- * Enables TMU of a router to be in uni-directional or bi-directional HiFi mode.
- * Calling tb_switch_tmu_configure() is required before calling this function,
- * to select the mode HiFi and directionality (uni-directional/bi-directional).
- * In both modes all tunneling should work. Uni-directional mode is required for
- * CLx (Link Low-Power) to work.
- */
-int tb_switch_tmu_enable(struct tb_switch *sw)
-{
-	if (sw->tmu.rate_request == TB_SWITCH_TMU_RATE_NORMAL)
-		return -EOPNOTSUPP;
-
-	return tb_switch_tmu_hifi_enable(sw);
-}
-
 /**
  * tb_switch_tmu_configure() - Configure the TMU rate and directionality
  * @sw: Router whose mode to change

From patchwork Mon May 9 20:16:56 2022
X-Patchwork-Submitter: Gil Fine
X-Patchwork-Id: 571770
From: Gil Fine
To: andreas.noever@gmail.com, michael.jamet@intel.com,
    mika.westerberg@linux.intel.com, YehezkelShB@gmail.com
Cc: gil.fine@intel.com, linux-usb@vger.kernel.org, lukas@wunner.de
Subject: [PATCH 6/6] thunderbolt: Change TMU mode to HiFi uni-directional
 once DisplayPort tunneled
Date: Mon, 9 May 2022 23:16:56 +0300
Message-Id: <20220509201656.502-7-gil.fine@intel.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20220509201656.502-1-gil.fine@intel.com>
References: <20220509201656.502-1-gil.fine@intel.com>
X-Mailing-List: linux-usb@vger.kernel.org

Configure the TMU mode to HiFi uni-directional once a DP tunnel is
created, to meet the accuracy requirement for DP tunneling given in CM
Guide 1.0, section 7.3.2. Due to an Intel hardware limitation, once the
TMU mode has been changed to HiFi uni-directional (while a DP tunnel
exists), it is not changed back to Normal uni-directional even if the
DP tunnel is torn down later.
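
The mechanism is the driver-core child-iterator pattern: walk the root
switch's immediate children and apply a TMU config callback to each, as
tb_switch_tmu_config_enable() in the diff below does. A minimal
standalone sketch of that shape (a plain array and loop stand in for
device_for_each_child(); the names are illustrative):

#include <stdio.h>

enum tmu_rate { TMU_RATE_NORMAL, TMU_RATE_HIFI };

struct sw {
	const char *name;
	enum tmu_rate rate;
};

/* Callback applied to each first-depth router. */
static int tmu_config_enable(struct sw *sw, void *data)
{
	sw->rate = *(enum tmu_rate *)data;
	printf("%s: TMU rate set to %s\n", sw->name,
	       sw->rate == TMU_RATE_HIFI ? "HiFi" : "Normal");
	return 0;
}

/* Stand-in for device_for_each_child(&root->dev, &rate, cb). */
static void for_each_child(struct sw *children, int n,
			   int (*cb)(struct sw *, void *), void *data)
{
	for (int i = 0; i < n; i++)
		cb(&children[i], data);
}

int main(void)
{
	struct sw children[] = { { "router0", TMU_RATE_NORMAL },
				 { "router1", TMU_RATE_NORMAL } };
	enum tmu_rate rate = TMU_RATE_HIFI;

	/* Once a DP tunnel appears, bump first-depth routers to HiFi. */
	for_each_child(children, 2, tmu_config_enable, &rate);
	return 0;
}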
Signed-off-by: Gil Fine
---
 drivers/thunderbolt/tb.c  | 28 ++++++++++++++++++++++++++++
 drivers/thunderbolt/tb.h  |  5 +++++
 drivers/thunderbolt/tmu.c | 14 ++++++++++++++
 3 files changed, 47 insertions(+)

diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
index 4f74789dd1be..bedabc407ab2 100644
--- a/drivers/thunderbolt/tb.c
+++ b/drivers/thunderbolt/tb.c
@@ -50,6 +50,8 @@ struct tb_hotplug_event {
 };
 
 static void tb_handle_hotplug(struct work_struct *work);
+static void tb_enable_tmu_1st_child(struct tb *tb,
+				    enum tb_switch_tmu_rate rate);
 
 static void tb_queue_hotplug(struct tb *tb, u64 route, u8 port, bool unplug)
 {
@@ -118,6 +120,13 @@ static void tb_switch_discover_tunnels(struct tb_switch *sw,
 		switch (port->config.type) {
 		case TB_TYPE_DP_HDMI_IN:
 			tunnel = tb_tunnel_discover_dp(tb, port, alloc_hopids);
+			/*
+			 * In case of DP tunnel exists, change TMU mode to
+			 * HiFi for CL0s to work.
+			 */
+			if (tunnel)
+				tb_enable_tmu_1st_child(tb,
+					TB_SWITCH_TMU_RATE_HIFI);
 			break;
 
 		case TB_TYPE_PCIE_DOWN:
@@ -235,6 +244,19 @@ static int tb_enable_tmu(struct tb_switch *sw)
 	return tb_switch_tmu_enable(sw);
 }
 
+/*
+ * Once a DP tunnel exists in the domain, we set the TMU mode so that
+ * it meets the accuracy requirements and also enables CLx entry (CL0s).
+ * We set the TMU mode of the first depth router(s) for CL0s to work.
+ */
+static void tb_enable_tmu_1st_child(struct tb *tb, enum tb_switch_tmu_rate rate)
+{
+	struct tb_sw_tmu_config tmu = { .rate = rate };
+
+	device_for_each_child(&tb->root_switch->dev, &tmu,
+			      tb_switch_tmu_config_enable);
+}
+
 /**
  * tb_find_unused_port() - return the first inactive port on @sw
  * @sw: Switch to find the port on
@@ -985,6 +1007,12 @@ static void tb_tunnel_dp(struct tb *tb)
 	list_add_tail(&tunnel->list, &tcm->tunnel_list);
 	tb_reclaim_usb3_bandwidth(tb, in, out);
 
+	/*
+	 * In case of DP tunnel exists, change TMU mode to
+	 * HiFi for CL0s to work.
+	 */
+	tb_enable_tmu_1st_child(tb, TB_SWITCH_TMU_RATE_HIFI);
+
 	return;
 
 err_free:
diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index a16fffba9dd2..3dbd9d919d5f 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -110,6 +110,10 @@ struct tb_switch_tmu {
 	enum tb_switch_tmu_rate rate_request;
 };
 
+struct tb_sw_tmu_config {
+	enum tb_switch_tmu_rate rate;
+};
+
 enum tb_clx {
 	TB_CLX_DISABLE,
 	/* CL0s and CL1 are enabled and supported together */
 	TB_CL1,
@@ -934,6 +938,7 @@ int tb_switch_tmu_enable(struct tb_switch *sw);
 void tb_switch_tmu_configure(struct tb_switch *sw,
 			     enum tb_switch_tmu_rate rate,
 			     bool unidirectional);
+int tb_switch_tmu_config_enable(struct device *dev, void *data);
 /**
  * tb_switch_tmu_is_enabled() - Checks if the specified TMU mode is enabled
  * @sw: Router whose TMU mode to check
diff --git a/drivers/thunderbolt/tmu.c b/drivers/thunderbolt/tmu.c
index e822ab90338b..b8ff9f64a71e 100644
--- a/drivers/thunderbolt/tmu.c
+++ b/drivers/thunderbolt/tmu.c
@@ -727,6 +727,20 @@ int tb_switch_tmu_enable(struct tb_switch *sw)
 	return tb_switch_tmu_set_time_disruption(sw, false);
 }
 
+int tb_switch_tmu_config_enable(struct device *dev, void *data)
+{
+	if (tb_is_switch(dev)) {
+		struct tb_switch *sw = tb_to_switch(dev);
+		struct tb_sw_tmu_config *tmu = data;
+
+		tb_switch_tmu_configure(sw, tmu->rate,
+					tb_switch_is_clx_enabled(sw, TB_CL1));
+		if (tb_switch_tmu_enable(sw))
+			tb_sw_dbg(sw, "Fail switching TMU to HiFi for 1st depth router\n");
+	}
+
+	return 0;
+}
+
 /**
  * tb_switch_tmu_configure() - Configure the TMU rate and directionality
  * @sw: Router whose mode to change
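
A closing note on the register packing used by
tb_switch_set_tmu_mode_params() in patch 5 above: the averaging
constants are cleared and re-packed into adjacent bit fields with
GENMASK()/FIELD_PREP(). A userspace approximation of that
read-modify-write (the GENMASK32/FIELD_PREP32 helpers below are
illustrative stand-ins and rely on the GCC/Clang __builtin_ctz):

#include <stdint.h>
#include <stdio.h>

/* Illustrative stand-ins for the kernel's GENMASK()/FIELD_PREP(). */
#define GENMASK32(h, l)		(((~0u) << (l)) & (~0u >> (31 - (h))))
#define FIELD_PREP32(m, v)	(((uint32_t)(v) << __builtin_ctz(m)) & (m))

#define FREQ_AVG_MASK	GENMASK32(5, 0)
#define DELAY_AVG_MASK	GENMASK32(11, 6)

int main(void)
{
	uint32_t val = 0xffffffff, avg = 4;

	/* Read-modify-write: clear the fields, then pack the new values. */
	val &= ~FREQ_AVG_MASK & ~DELAY_AVG_MASK;
	val |= FIELD_PREP32(FREQ_AVG_MASK, avg) |
	       FIELD_PREP32(DELAY_AVG_MASK, avg);
	printf("CS_15 = 0x%08x\n", val);
	return 0;
}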