From patchwork Tue Jan 19 00:40:22 2021
X-Patchwork-Submitter: Vinicius Costa Gomes
X-Patchwork-Id: 366757
From: Vinicius Costa Gomes
To: netdev@vger.kernel.org
Cc: jhs@mojatatu.com, xiyou.wangcong@gmail.com, jiri@resnulli.us,
    kuba@kernel.org, m-karicheri2@ti.com, vladimir.oltean@nxp.com,
    Jose.Abreu@synopsys.com, po.liu@nxp.com, intel-wired-lan@lists.osuosl.org,
    anthony.l.nguyen@intel.com, mkubecek@suse.cz
Subject: [PATCH net-next v2 2/8] taprio: Add support for frame preemption offload
Date: Mon, 18 Jan 2021 16:40:22 -0800
Message-Id: <20210119004028.2809425-3-vinicius.gomes@intel.com>
In-Reply-To: <20210119004028.2809425-1-vinicius.gomes@intel.com>
References: <20210119004028.2809425-1-vinicius.gomes@intel.com>
X-Mailing-List: netdev@vger.kernel.org

Add a way to configure which traffic classes are marked as preemptible
and which are marked as express.

Even though frame preemption is not a "real" offload (it can't be
executed purely in software), keeping this information close to where
the mapping of traffic classes to queues is specified should make it
easier to use.

taprio receives the set of traffic classes marked as
express/preemptible and, when offloading frame preemption to the
driver, converts it so the driver receives the set of queues marked as
express/preemptible.
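As an illustration of that conversion, here is a minimal standalone
sketch (plain C; the offset/count arrays stand in for the per-device
traffic class configuration that the real tc_map_to_queue_mask()
helper reads from the net_device, so this is not the kernel
implementation):

	#include <stdint.h>
	#include <stdio.h>

	/* Sketch only: expand a bitmask of preemptible traffic classes
	 * into a bitmask of preemptible queues, given each TC's queue
	 * offset/count as configured via mqprio/taprio. */
	static uint32_t tc_map_to_queue_mask(uint32_t tc_mask, int num_tc,
					     const int *offset, const int *count)
	{
		uint32_t queue_mask = 0;
		int tc, q;

		for (tc = 0; tc < num_tc; tc++) {
			if (!(tc_mask & (1u << tc)))
				continue;
			/* every queue belonging to this TC inherits its marking */
			for (q = offset[tc]; q < offset[tc] + count[tc]; q++)
				queue_mask |= 1u << q;
		}
		return queue_mask;
	}

	int main(void)
	{
		/* Example: 4 TCs mapped 1:1 onto queues 0..3, TCs 1-3 preemptible. */
		int offset[] = { 0, 1, 2, 3 }, count[] = { 1, 1, 1, 1 };

		printf("preemptible queue mask: 0x%x\n",
		       (unsigned)tc_map_to_queue_mask(0xe, 4, offset, count));
		return 0;
	}

With the 1:1 TC-to-queue mapping above the two masks coincide; when a
TC spans several queues, the resulting queue mask is wider than the TC
mask.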
Signed-off-by: Vinicius Costa Gomes
---
 include/linux/netdevice.h      |  1 +
 include/net/pkt_sched.h        |  4 ++++
 include/uapi/linux/pkt_sched.h |  1 +
 net/sched/sch_taprio.c         | 41 ++++++++++++++++++++++++++++++----
 4 files changed, 43 insertions(+), 4 deletions(-)

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 5b949076ed23..7388c20c07a8 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -858,6 +858,7 @@ enum tc_setup_type {
 	TC_SETUP_QDISC_ETS,
 	TC_SETUP_QDISC_TBF,
 	TC_SETUP_QDISC_FIFO,
+	TC_SETUP_PREEMPT,
 };

 /* These structures hold the attributes of bpf state that are being passed
diff --git a/include/net/pkt_sched.h b/include/net/pkt_sched.h
index 15b1b30f454e..be5ff1535332 100644
--- a/include/net/pkt_sched.h
+++ b/include/net/pkt_sched.h
@@ -183,6 +183,10 @@ struct tc_taprio_qopt_offload {
 	struct tc_taprio_sched_entry entries[];
 };

+struct tc_preempt_qopt_offload {
+	u32 preemptible_queues;
+};
+
 /* Reference counting */
 struct tc_taprio_qopt_offload *taprio_offload_get(struct tc_taprio_qopt_offload *offload);
diff --git a/include/uapi/linux/pkt_sched.h b/include/uapi/linux/pkt_sched.h
index 9e7c2c607845..9ca9d2e55557 100644
--- a/include/uapi/linux/pkt_sched.h
+++ b/include/uapi/linux/pkt_sched.h
@@ -1240,6 +1240,7 @@ enum {
 	TCA_TAPRIO_ATTR_SCHED_CYCLE_TIME_EXTENSION, /* s64 */
 	TCA_TAPRIO_ATTR_FLAGS, /* u32 */
 	TCA_TAPRIO_ATTR_TXTIME_DELAY, /* u32 */
+	TCA_TAPRIO_ATTR_PREEMPT_TCS, /* u32 */
 	__TCA_TAPRIO_ATTR_MAX,
 };
diff --git a/net/sched/sch_taprio.c b/net/sched/sch_taprio.c
index 6f775275826a..e4b511a0ee38 100644
--- a/net/sched/sch_taprio.c
+++ b/net/sched/sch_taprio.c
@@ -64,6 +64,7 @@ struct taprio_sched {
 	struct Qdisc **qdiscs;
 	struct Qdisc *root;
 	u32 flags;
+	u32 preemptible_tcs;
 	enum tk_offsets tk_offset;
 	int clockid;
 	atomic64_t picos_per_byte; /* Using picoseconds because for 10Gbps+
@@ -776,6 +777,7 @@ static const struct nla_policy taprio_policy[TCA_TAPRIO_ATTR_MAX + 1] = {
 	[TCA_TAPRIO_ATTR_SCHED_CYCLE_TIME_EXTENSION] = { .type = NLA_S64 },
 	[TCA_TAPRIO_ATTR_FLAGS] = { .type = NLA_U32 },
 	[TCA_TAPRIO_ATTR_TXTIME_DELAY] = { .type = NLA_U32 },
+	[TCA_TAPRIO_ATTR_PREEMPT_TCS] = { .type = NLA_U32 },
 };

 static int fill_sched_entry(struct taprio_sched *q, struct nlattr **tb,
@@ -1268,6 +1270,7 @@ static int taprio_disable_offload(struct net_device *dev,
 				  struct netlink_ext_ack *extack)
 {
 	const struct net_device_ops *ops = dev->netdev_ops;
+	struct tc_preempt_qopt_offload preempt = { };
 	struct tc_taprio_qopt_offload *offload;
 	int err;

@@ -1286,13 +1289,15 @@ static int taprio_disable_offload(struct net_device *dev,
 	offload->enable = 0;
 	err = ops->ndo_setup_tc(dev, TC_SETUP_QDISC_TAPRIO, offload);
-	if (err < 0) {
+	if (err < 0)
+		NL_SET_ERR_MSG(extack,
+			       "Device failed to disable offload");
+
+	err = ops->ndo_setup_tc(dev, TC_SETUP_PREEMPT, &preempt);
+	if (err < 0)
 		NL_SET_ERR_MSG(extack,
 			       "Device failed to disable offload");
-		goto out;
-	}

-out:
 	taprio_offload_free(offload);

 	return err;
@@ -1509,6 +1514,29 @@ static int taprio_change(struct Qdisc *sch, struct nlattr *opt,
 					       mqprio->prio_tc_map[i]);
 	}

+	/* It's valid to enable frame preemption without any kind of
+	 * offloading being enabled, so keep it separated.
+	 */
+	if (tb[TCA_TAPRIO_ATTR_PREEMPT_TCS]) {
+		u32 preempt = nla_get_u32(tb[TCA_TAPRIO_ATTR_PREEMPT_TCS]);
+		struct tc_preempt_qopt_offload qopt = { };
+
+		if (preempt == U32_MAX) {
+			NL_SET_ERR_MSG(extack, "At least one queue must not be preemptible");
+			err = -EINVAL;
+			goto free_sched;
+		}
+
+		qopt.preemptible_queues = tc_map_to_queue_mask(dev, preempt);
+
+		err = dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_PREEMPT,
+						    &qopt);
+		if (err)
+			goto free_sched;
+
+		q->preemptible_tcs = preempt;
+	}
+
 	if (FULL_OFFLOAD_IS_ENABLED(q->flags))
 		err = taprio_enable_offload(dev, q, new_admin, extack);
 	else
@@ -1665,6 +1693,7 @@ static int taprio_init(struct Qdisc *sch, struct nlattr *opt,
 	 */
 	q->clockid = -1;
 	q->flags = TAPRIO_FLAGS_INVALID;
+	q->preemptible_tcs = U32_MAX;

 	spin_lock(&taprio_list_lock);
 	list_add(&q->taprio_list, &taprio_list);
@@ -1848,6 +1877,10 @@ static int taprio_dump(struct Qdisc *sch, struct sk_buff *skb)
 	if (q->flags && nla_put_u32(skb, TCA_TAPRIO_ATTR_FLAGS, q->flags))
 		goto options_error;

+	if (q->preemptible_tcs != U32_MAX &&
+	    nla_put_u32(skb, TCA_TAPRIO_ATTR_PREEMPT_TCS, q->preemptible_tcs))
+		goto options_error;
+
 	if (q->txtime_delay &&
 	    nla_put_u32(skb, TCA_TAPRIO_ATTR_TXTIME_DELAY, q->txtime_delay))
 		goto options_error;

From patchwork Tue Jan 19 00:40:23 2021
X-Patchwork-Submitter: Vinicius Costa Gomes
X-Patchwork-Id: 366756
From: Vinicius Costa Gomes
To: netdev@vger.kernel.org
Cc: jhs@mojatatu.com, xiyou.wangcong@gmail.com, jiri@resnulli.us,
    kuba@kernel.org, m-karicheri2@ti.com, vladimir.oltean@nxp.com,
    Jose.Abreu@synopsys.com, po.liu@nxp.com, intel-wired-lan@lists.osuosl.org,
    anthony.l.nguyen@intel.com, mkubecek@suse.cz
Subject: [PATCH net-next v2 3/8] igc: Set the RX packet buffer size for TSN mode
Date: Mon, 18 Jan 2021 16:40:23 -0800
Message-Id: <20210119004028.2809425-4-vinicius.gomes@intel.com>
In-Reply-To: <20210119004028.2809425-1-vinicius.gomes@intel.com>
References: <20210119004028.2809425-1-vinicius.gomes@intel.com>
X-Mailing-List: netdev@vger.kernel.org

In preparation for supporting frame preemption, when entering TSN mode,
set the receive packet buffer to 16KB for the Express MAC, 16KB for the
Preemptible MAC, and 2KB for the BMC, as specified in section 7.1.3.2
of the datasheet.

Signed-off-by: Vinicius Costa Gomes
---
 drivers/net/ethernet/intel/igc/igc_defines.h |  2 ++
 drivers/net/ethernet/intel/igc/igc_tsn.c     | 14 ++++++++++++--
 2 files changed, 14 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/intel/igc/igc_defines.h b/drivers/net/ethernet/intel/igc/igc_defines.h
index 32f5fd684139..0e78abfd99ee 100644
--- a/drivers/net/ethernet/intel/igc/igc_defines.h
+++ b/drivers/net/ethernet/intel/igc/igc_defines.h
@@ -351,6 +351,8 @@
 #define IGC_RXPBS_CFG_TS_EN	0x80000000 /* Timestamp in Rx buffer */

 #define IGC_TXPBSIZE_TSN	0x04145145 /* 5k bytes buffer for each queue */
+#define IGC_RXPBSIZE_TSN	0x00010090 /* 16KB for EXP + 16KB for BE + 2KB for BMC */
+#define IGC_RXPBSIZE_SIZE_MASK	0x0001FFFF

 #define IGC_DTXMXPKTSZ_TSN	0x19 /* 1600 bytes of max TX DMA packet size */
 #define IGC_DTXMXPKTSZ_DEFAULT	0x98 /* 9728-byte Jumbo frames */
diff --git a/drivers/net/ethernet/intel/igc/igc_tsn.c b/drivers/net/ethernet/intel/igc/igc_tsn.c
index 174103c4bea6..38451cf05ac6 100644
--- a/drivers/net/ethernet/intel/igc/igc_tsn.c
+++ b/drivers/net/ethernet/intel/igc/igc_tsn.c
@@ -24,7 +24,7 @@ static bool is_any_launchtime(struct igc_adapter *adapter)
 static int igc_tsn_disable_offload(struct igc_adapter *adapter)
 {
 	struct igc_hw *hw = &adapter->hw;
-	u32 tqavctrl;
+	u32 tqavctrl, rxpbs;
 	int i;

 	if (!(adapter->flags & IGC_FLAG_TSN_QBV_ENABLED))
@@ -35,6 +35,11 @@ static int igc_tsn_disable_offload(struct igc_adapter *adapter)
 	wr32(IGC_TXPBS, I225_TXPBSIZE_DEFAULT);
 	wr32(IGC_DTXMXPKTSZ, IGC_DTXMXPKTSZ_DEFAULT);

+	rxpbs = rd32(IGC_RXPBS) & ~IGC_RXPBSIZE_SIZE_MASK;
+	rxpbs |= I225_RXPBSIZE_DEFAULT;
+
+	wr32(IGC_RXPBS, rxpbs);
+
 	tqavctrl = rd32(IGC_TQAVCTRL);
 	tqavctrl &= ~(IGC_TQAVCTRL_TRANSMIT_MODE_TSN |
 		      IGC_TQAVCTRL_ENHANCED_QAV);
@@ -64,7 +69,7 @@ static int igc_tsn_enable_offload(struct igc_adapter *adapter)
 {
 	struct igc_hw *hw = &adapter->hw;
 	u32 tqavctrl, baset_l, baset_h;
-	u32 sec, nsec, cycle;
+	u32 sec, nsec, cycle, rxpbs;
 	ktime_t base_time, systim;
 	int i;

@@ -79,6 +84,11 @@ static int igc_tsn_enable_offload(struct igc_adapter *adapter)
 	wr32(IGC_TXPBS, IGC_TXPBSIZE_TSN);
 	tqavctrl = rd32(IGC_TQAVCTRL);

+	rxpbs = rd32(IGC_RXPBS) & ~IGC_RXPBSIZE_SIZE_MASK;
+	rxpbs |= IGC_RXPBSIZE_TSN;
+
+	wr32(IGC_RXPBS, rxpbs);
+
 	tqavctrl |= IGC_TQAVCTRL_TRANSMIT_MODE_TSN | IGC_TQAVCTRL_ENHANCED_QAV;
 	wr32(IGC_TQAVCTRL, tqavctrl);
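To make the RXPBS update above concrete, a minimal sketch of the
read-modify-write, using only the constants visible in this patch (the
prior register contents used in main() are an assumption for the
example, not a documented reset value):

	#include <assert.h>
	#include <stdint.h>

	#define IGC_RXPBS_CFG_TS_EN	0x80000000u /* Timestamp in Rx buffer */
	#define IGC_RXPBSIZE_TSN	0x00010090u
	#define IGC_RXPBSIZE_SIZE_MASK	0x0001FFFFu

	/* Clear only the packet-buffer-size field, then OR in the TSN
	 * sizes, so unrelated bits (e.g. the Rx timestamp enable)
	 * survive the update. */
	static uint32_t rxpbs_set_tsn(uint32_t rxpbs)
	{
		rxpbs &= ~IGC_RXPBSIZE_SIZE_MASK;
		rxpbs |= IGC_RXPBSIZE_TSN;
		return rxpbs;
	}

	int main(void)
	{
		/* Assumed prior value: timestamping on, some default size. */
		uint32_t reg = IGC_RXPBS_CFG_TS_EN | 0x32;

		reg = rxpbs_set_tsn(reg);
		assert(reg == (IGC_RXPBS_CFG_TS_EN | IGC_RXPBSIZE_TSN));
		return 0;
	}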
From patchwork Tue Jan 19 00:40:25 2021
X-Patchwork-Submitter: Vinicius Costa Gomes
X-Patchwork-Id: 366755
From: Vinicius Costa Gomes
To: netdev@vger.kernel.org
Cc: jhs@mojatatu.com, xiyou.wangcong@gmail.com, jiri@resnulli.us,
    kuba@kernel.org, m-karicheri2@ti.com, vladimir.oltean@nxp.com,
    Jose.Abreu@synopsys.com, po.liu@nxp.com, intel-wired-lan@lists.osuosl.org,
    anthony.l.nguyen@intel.com, mkubecek@suse.cz
Subject: [PATCH net-next v2 5/8] igc: Avoid TX hangs because of long cycles
Date: Mon, 18 Jan 2021 16:40:25 -0800
Message-Id: <20210119004028.2809425-6-vinicius.gomes@intel.com>
In-Reply-To: <20210119004028.2809425-1-vinicius.gomes@intel.com>
References: <20210119004028.2809425-1-vinicius.gomes@intel.com>
X-Mailing-List: netdev@vger.kernel.org

Avoid possible TX hangs caused by long Qbv cycles. Long cycles (more
than 1 second) can block transmissions for that entire time, and as
the TX hang timeout is close to 1 second, reduce the default cycle
time to something more reasonable: the value chosen is 1ms.
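A small sketch of the arithmetic behind this choice (the ~1s watchdog
figure is the commit's own estimate, not a constant taken from the
kernel; the helper name is made up for the example):

	#include <stdint.h>
	#include <stdio.h>

	#define NSEC_PER_SEC	1000000000ull
	#define NSEC_PER_MSEC	1000000ull

	/* A queue open for open_ns out of every cycle_ns can sit gated
	 * for cycle_ns - open_ns at a stretch; if that approaches the
	 * TX watchdog timeout, the stack may declare a hang. */
	static int risks_tx_hang(uint64_t cycle_ns, uint64_t open_ns,
				 uint64_t watchdog_ns)
	{
		return cycle_ns - open_ns >= watchdog_ns;
	}

	int main(void)
	{
		uint64_t watchdog = NSEC_PER_SEC; /* assumed ~1s */

		printf("2s cycle, 10ms window:  %s\n",
		       risks_tx_hang(2 * NSEC_PER_SEC, 10 * NSEC_PER_MSEC,
				     watchdog) ? "may hang" : "ok");
		printf("1ms cycle, 10us window: %s\n",
		       risks_tx_hang(NSEC_PER_MSEC, 10000, watchdog) ?
		       "may hang" : "ok");
		return 0;
	}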
Signed-off-by: Vinicius Costa Gomes
---
 drivers/net/ethernet/intel/igc/igc_main.c | 4 ++--
 drivers/net/ethernet/intel/igc/igc_tsn.c  | 6 +++---
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
index afd6a62da29d..f1b31fa04734 100644
--- a/drivers/net/ethernet/intel/igc/igc_main.c
+++ b/drivers/net/ethernet/intel/igc/igc_main.c
@@ -4693,12 +4693,12 @@ static int igc_save_launchtime_params(struct igc_adapter *adapter, int queue,
 	if (adapter->base_time)
 		return 0;

-	adapter->cycle_time = NSEC_PER_SEC;
+	adapter->cycle_time = NSEC_PER_MSEC;

 	for (i = 0; i < adapter->num_tx_queues; i++) {
 		ring = adapter->tx_ring[i];
 		ring->start_time = 0;
-		ring->end_time = NSEC_PER_SEC;
+		ring->end_time = NSEC_PER_MSEC;
 	}

 	return 0;
diff --git a/drivers/net/ethernet/intel/igc/igc_tsn.c b/drivers/net/ethernet/intel/igc/igc_tsn.c
index 38451cf05ac6..f5a5527adb21 100644
--- a/drivers/net/ethernet/intel/igc/igc_tsn.c
+++ b/drivers/net/ethernet/intel/igc/igc_tsn.c
@@ -54,11 +54,11 @@ static int igc_tsn_disable_offload(struct igc_adapter *adapter)
 		wr32(IGC_TXQCTL(i), 0);
 		wr32(IGC_STQT(i), 0);
-		wr32(IGC_ENDQT(i), NSEC_PER_SEC);
+		wr32(IGC_ENDQT(i), NSEC_PER_MSEC);
 	}

-	wr32(IGC_QBVCYCLET_S, NSEC_PER_SEC);
-	wr32(IGC_QBVCYCLET, NSEC_PER_SEC);
+	wr32(IGC_QBVCYCLET_S, NSEC_PER_MSEC);
+	wr32(IGC_QBVCYCLET, NSEC_PER_MSEC);

 	adapter->flags &= ~IGC_FLAG_TSN_QBV_ENABLED;

From patchwork Tue Jan 19 00:40:27 2021
X-Patchwork-Submitter: Vinicius Costa Gomes
X-Patchwork-Id: 366754
From: Vinicius Costa Gomes
To: netdev@vger.kernel.org
Cc: jhs@mojatatu.com,
    xiyou.wangcong@gmail.com, jiri@resnulli.us, kuba@kernel.org,
    m-karicheri2@ti.com, vladimir.oltean@nxp.com, Jose.Abreu@synopsys.com,
    po.liu@nxp.com, intel-wired-lan@lists.osuosl.org,
    anthony.l.nguyen@intel.com, mkubecek@suse.cz
Subject: [PATCH net-next v2 7/8] igc: Add support for Frame Preemption offload
Date: Mon, 18 Jan 2021 16:40:27 -0800
Message-Id: <20210119004028.2809425-8-vinicius.gomes@intel.com>
In-Reply-To: <20210119004028.2809425-1-vinicius.gomes@intel.com>
References: <20210119004028.2809425-1-vinicius.gomes@intel.com>
X-Mailing-List: netdev@vger.kernel.org

After the set of queues marked as preemptible is exposed to the
driver, we can configure the hardware to enable the frame preemption
functionality.

Signed-off-by: Vinicius Costa Gomes
---
 drivers/net/ethernet/intel/igc/igc_main.c | 32 +++++++++++++++++++++++
 1 file changed, 32 insertions(+)

diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
index f1b31fa04734..6a09f37ba7ed 100644
--- a/drivers/net/ethernet/intel/igc/igc_main.c
+++ b/drivers/net/ethernet/intel/igc/igc_main.c
@@ -4818,6 +4818,23 @@ static int igc_save_qbv_schedule(struct igc_adapter *adapter,
 	return 0;
 }

+static int igc_save_frame_preemption(struct igc_adapter *adapter,
+				     struct tc_preempt_qopt_offload *qopt)
+{
+	u32 preempt;
+	int i;
+
+	preempt = qopt->preemptible_queues;
+
+	for (i = 0; i < adapter->num_tx_queues; i++) {
+		struct igc_ring *ring = adapter->tx_ring[i];
+
+		ring->preemptible = preempt & BIT(i);
+	}
+
+	return 0;
+}
+
 static int igc_tsn_enable_qbv_scheduling(struct igc_adapter *adapter,
 					 struct tc_taprio_qopt_offload *qopt)
 {
@@ -4834,6 +4851,18 @@ static int igc_tsn_enable_qbv_scheduling(struct igc_adapter *adapter,
 	return igc_tsn_offload_apply(adapter);
 }

+static int igc_tsn_enable_frame_preemption(struct igc_adapter *adapter,
+					   struct tc_preempt_qopt_offload *qopt)
+{
+	int err;
+
+	err = igc_save_frame_preemption(adapter, qopt);
+	if (err)
+		return err;
+
+	return igc_tsn_offload_apply(adapter);
+}
+
 static int igc_setup_tc(struct net_device *dev, enum tc_setup_type type,
 			void *type_data)
 {
@@ -4846,6 +4875,9 @@ static int igc_setup_tc(struct net_device *dev, enum tc_setup_type type,
 	case TC_SETUP_QDISC_ETF:
 		return igc_tsn_enable_launchtime(adapter, type_data);

+	case TC_SETUP_PREEMPT:
+		return igc_tsn_enable_frame_preemption(adapter, type_data);
+
 	default:
 		return -EOPNOTSUPP;
 	}
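A self-contained sketch of the qdisc/driver contract this patch
implements, with plain globals standing in for the adapter and ring
structures (the 0xe mask is an assumed example marking queues 1-3
preemptible, matching what taprio would hand the driver via
TC_SETUP_PREEMPT):

	#include <stdbool.h>
	#include <stdint.h>
	#include <stdio.h>

	/* Mirrors include/net/pkt_sched.h from patch 2/8. */
	struct tc_preempt_qopt_offload {
		uint32_t preemptible_queues;
	};

	#define NUM_TX_QUEUES 4

	static bool ring_preemptible[NUM_TX_QUEUES];

	/* Stand-in for igc_save_frame_preemption(): expand the queue
	 * bitmask into the per-ring flag that the hardware-programming
	 * code consumes later in the series. */
	static int save_frame_preemption(const struct tc_preempt_qopt_offload *qopt)
	{
		int i;

		for (i = 0; i < NUM_TX_QUEUES; i++)
			ring_preemptible[i] = qopt->preemptible_queues & (1u << i);
		return 0;
	}

	int main(void)
	{
		struct tc_preempt_qopt_offload qopt = { .preemptible_queues = 0xe };
		int i;

		save_frame_preemption(&qopt);
		for (i = 0; i < NUM_TX_QUEUES; i++)
			printf("queue %d: %s\n", i,
			       ring_preemptible[i] ? "preemptible" : "express");
		return 0;
	}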