From patchwork Tue Oct 13 16:25:37 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Honnappa Nagarahalli
X-Patchwork-Id: 317726
From: Honnappa Nagarahalli
To: dev@dpdk.org, honnappa.nagarahalli@arm.com, phil.yang@arm.com,
	thomas@monjalon.net, arybchenko@solarflare.com, ferruh.yigit@intel.com,
	konstantin.ananyev@intel.com, jerinj@marvell.com, tgw_team@tencent.com
Cc: abhinandan.gujjar@intel.com, nd@arm.com, bruce.richardson@intel.com,
	john.mcnamara@intel.com, reshma.pattan@intel.com, stable@dpdk.org
Date: Tue, 13 Oct 2020 11:25:37 -0500
Message-Id: <20201013162537.24029-2-honnappa.nagarahalli@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20201013162537.24029-1-honnappa.nagarahalli@arm.com>
References: <20201002000711.41511-1-honnappa.nagarahalli@arm.com>
	<20201013162537.24029-1-honnappa.nagarahalli@arm.com>
Subject: [dpdk-dev] [PATCH v2 2/2] lib/ethdev: fix memory ordering for
	callback functions

Callback functions are registered on the control plane and accessed
from the data plane. Hence, correct memory ordering must be used to
avoid race conditions.

Fixes: 4dc294158cac ("ethdev: support optional Rx and Tx callbacks")
Fixes: c8231c63ddcb ("ethdev: insert Rx callback as head of list")
Cc: bruce.richardson@intel.com
Cc: john.mcnamara@intel.com
Cc: reshma.pattan@intel.com
Cc: stable@dpdk.org

Signed-off-by: Honnappa Nagarahalli
Reviewed-by: Ola Liljedahl
Reviewed-by: Phil Yang
Acked-by: Konstantin Ananyev
---
v2 - load the callback function pointer using __atomic_load_n

 lib/librte_ethdev/rte_ethdev.c | 28 ++++++++++++++++++-----
 lib/librte_ethdev/rte_ethdev.h | 42 ++++++++++++++++++++++++++--------
 2 files changed, 55 insertions(+), 15 deletions(-)

--
2.17.1

diff --git a/lib/librte_ethdev/rte_ethdev.c b/lib/librte_ethdev/rte_ethdev.c
index 59a41c07f..d89fcdc77 100644
--- a/lib/librte_ethdev/rte_ethdev.c
+++ b/lib/librte_ethdev/rte_ethdev.c
@@ -4486,12 +4486,20 @@ rte_eth_add_rx_callback(uint16_t port_id, uint16_t queue_id,
 		rte_eth_devices[port_id].post_rx_burst_cbs[queue_id];
 
 	if (!tail) {
-		rte_eth_devices[port_id].post_rx_burst_cbs[queue_id] = cb;
+		/* Stores to cb->fn and cb->param should complete before
+		 * cb is visible to data plane.
+		 */
+		__atomic_store_n(
+			&rte_eth_devices[port_id].post_rx_burst_cbs[queue_id],
+			cb, __ATOMIC_RELEASE);
 
 	} else {
 		while (tail->next)
 			tail = tail->next;
-		tail->next = cb;
+		/* Stores to cb->fn and cb->param should complete before
+		 * cb is visible to data plane.
+		 */
+		__atomic_store_n(&tail->next, cb, __ATOMIC_RELEASE);
 	}
 	rte_spinlock_unlock(&rte_eth_rx_cb_lock);
@@ -4576,12 +4584,20 @@ rte_eth_add_tx_callback(uint16_t port_id, uint16_t queue_id,
 		rte_eth_devices[port_id].pre_tx_burst_cbs[queue_id];
 
 	if (!tail) {
-		rte_eth_devices[port_id].pre_tx_burst_cbs[queue_id] = cb;
+		/* Stores to cb->fn and cb->param should complete before
+		 * cb is visible to data plane.
+		 */
+		__atomic_store_n(
+			&rte_eth_devices[port_id].pre_tx_burst_cbs[queue_id],
+			cb, __ATOMIC_RELEASE);
 
 	} else {
 		while (tail->next)
 			tail = tail->next;
-		tail->next = cb;
+		/* Stores to cb->fn and cb->param should complete before
+		 * cb is visible to data plane.
+		 */
+		__atomic_store_n(&tail->next, cb, __ATOMIC_RELEASE);
 	}
 	rte_spinlock_unlock(&rte_eth_tx_cb_lock);
@@ -4612,7 +4628,7 @@ rte_eth_remove_rx_callback(uint16_t port_id, uint16_t queue_id,
 		cb = *prev_cb;
 		if (cb == user_cb) {
 			/* Remove the user cb from the callback list. */
-			*prev_cb = cb->next;
+			__atomic_store_n(prev_cb, cb->next, __ATOMIC_RELAXED);
 			ret = 0;
 			break;
 		}
@@ -4646,7 +4662,7 @@ rte_eth_remove_tx_callback(uint16_t port_id, uint16_t queue_id,
 		cb = *prev_cb;
 		if (cb == user_cb) {
 			/* Remove the user cb from the callback list. */
-			*prev_cb = cb->next;
+			__atomic_store_n(prev_cb, cb->next, __ATOMIC_RELAXED);
 			ret = 0;
 			break;
 		}
diff --git a/lib/librte_ethdev/rte_ethdev.h b/lib/librte_ethdev/rte_ethdev.h
index 70295d7ab..6ad2a549b 100644
--- a/lib/librte_ethdev/rte_ethdev.h
+++ b/lib/librte_ethdev/rte_ethdev.h
@@ -3734,7 +3734,8 @@ struct rte_eth_rxtx_callback;
  *   The callback function
  * @param user_param
  *   A generic pointer parameter which will be passed to each invocation of the
- *   callback function on this port and queue.
+ *   callback function on this port and queue. Inter-thread synchronization
+ *   of any user data changes is the responsibility of the user.
  *
  * @return
  *   NULL on error.
@@ -3763,7 +3764,8 @@ rte_eth_add_rx_callback(uint16_t port_id, uint16_t queue_id,
  *   The callback function
  * @param user_param
  *   A generic pointer parameter which will be passed to each invocation of the
- *   callback function on this port and queue.
+ *   callback function on this port and queue. Inter-thread synchronization
+ *   of any user data changes is the responsibility of the user.
  *
  * @return
  *   NULL on error.
@@ -3791,7 +3793,8 @@ rte_eth_add_first_rx_callback(uint16_t port_id, uint16_t queue_id,
  *   The callback function
  * @param user_param
  *   A generic pointer parameter which will be passed to each invocation of the
- *   callback function on this port and queue.
+ *   callback function on this port and queue. Inter-thread synchronization
+ *   of any user data changes is the responsibility of the user.
  *
  * @return
  *   NULL on error.
@@ -3816,7 +3819,9 @@ rte_eth_add_tx_callback(uint16_t port_id, uint16_t queue_id,
  *   on that queue.
  *
  * - After a short delay - where the delay is sufficient to allow any
- *   in-flight callbacks to complete.
+ *   in-flight callbacks to complete. Alternately, the RCU mechanism can be
+ *   used to detect when data plane threads have ceased referencing the
+ *   callback memory.
  *
  * @param port_id
  *   The port identifier of the Ethernet device.
@@ -3849,7 +3854,9 @@ int rte_eth_remove_rx_callback(uint16_t port_id, uint16_t queue_id,
  *   on that queue.
  *
  * - After a short delay - where the delay is sufficient to allow any
- *   in-flight callbacks to complete.
+ *   in-flight callbacks to complete. Alternately, the RCU mechanism can be
+ *   used to detect when data plane threads have ceased referencing the
+ *   callback memory.
  *
  * @param port_id
  *   The port identifier of the Ethernet device.
@@ -4510,10 +4517,18 @@ rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
 				     rx_pkts, nb_pkts);
 
 #ifdef RTE_ETHDEV_RXTX_CALLBACKS
-	if (unlikely(dev->post_rx_burst_cbs[queue_id] != NULL)) {
-		struct rte_eth_rxtx_callback *cb =
-				dev->post_rx_burst_cbs[queue_id];
+	struct rte_eth_rxtx_callback *cb;
+	/* __ATOMIC_RELEASE memory order was used when the
+	 * call back was inserted into the list.
+	 * Since there is a clear dependency between loading
+	 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
+	 * not required.
+	 */
+	cb = __atomic_load_n(&dev->post_rx_burst_cbs[queue_id],
+		__ATOMIC_RELAXED);
+
+	if (unlikely(cb != NULL)) {
 		do {
 			nb_rx = cb->fn.rx(port_id, queue_id, rx_pkts, nb_rx,
 					nb_pkts, cb->param);
@@ -4775,7 +4790,16 @@ rte_eth_tx_burst(uint16_t port_id, uint16_t queue_id,
 #endif
 
 #ifdef RTE_ETHDEV_RXTX_CALLBACKS
-	struct rte_eth_rxtx_callback *cb = dev->pre_tx_burst_cbs[queue_id];
+	struct rte_eth_rxtx_callback *cb;
+
+	/* __ATOMIC_RELEASE memory order was used when the
+	 * call back was inserted into the list.
+	 * Since there is a clear dependency between loading
+	 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
+	 * not required.
+	 */
+	cb = __atomic_load_n(&dev->pre_tx_burst_cbs[queue_id],
+		__ATOMIC_RELAXED);
 
 	if (unlikely(cb != NULL)) {
 		do {
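
[Editor's illustration - not part of the patch] For readers following the
release/relaxed reasoning above, here is a minimal, self-contained sketch of
the publish/consume pattern the diff applies to the callback list. The names
cb_node, publish_cb, run_cbs and hello are hypothetical and are not DPDK
APIs; only the GCC __atomic_* builtins are the same ones used in the patch,
and the sketch covers just the empty-list (head pointer) case.

/* Control plane fills in the node, then publishes it with a RELEASE store
 * so the stores to fn/param become visible before the pointer itself.
 * Data plane loads the pointer with RELAXED order; dereferencing the loaded
 * pointer forms an address dependency, which is the reason the patch gives
 * for not needing ACQUIRE on the reader side.
 */
#include <stddef.h>
#include <stdio.h>

struct cb_node {
	void (*fn)(void *param);   /* callback function */
	void *param;               /* user parameter */
	struct cb_node *next;      /* next callback in the list */
};

/* Control-plane side: publish a fully initialized node at the list head. */
static void
publish_cb(struct cb_node **head, struct cb_node *node,
	   void (*fn)(void *), void *param)
{
	node->fn = fn;
	node->param = param;
	node->next = NULL;
	/* RELEASE: node->fn/param/next are complete before the node
	 * becomes reachable through *head.
	 */
	__atomic_store_n(head, node, __ATOMIC_RELEASE);
}

/* Data-plane side: walk the list and invoke each callback. */
static void
run_cbs(struct cb_node **head)
{
	/* RELAXED load paired with the RELEASE store above; the later
	 * dereference of 'cb' creates an address dependency.
	 */
	struct cb_node *cb = __atomic_load_n(head, __ATOMIC_RELAXED);

	while (cb != NULL) {
		cb->fn(cb->param);
		cb = __atomic_load_n(&cb->next, __ATOMIC_RELAXED);
	}
}

static void
hello(void *param)
{
	printf("callback invoked with %s\n", (char *)param);
}

int
main(void)
{
	static struct cb_node node;
	static char msg[] = "example";
	struct cb_node *head = NULL;

	publish_cb(&head, &node, hello, msg);
	run_cbs(&head);
	return 0;
}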