From patchwork Tue Oct  1 18:28:55 2019
X-Patchwork-Submitter: Honnappa Nagarahalli
X-Patchwork-Id: 174925
From: Honnappa Nagarahalli
To: bruce.richardson@intel.com, vladimir.medvedkin@intel.com,
 olivier.matz@6wind.com
Cc: dev@dpdk.org, konstantin.ananyev@intel.com, stephen@networkplumber.org,
 paulmck@linux.ibm.com, Gavin.Hu@arm.com, Honnappa.Nagarahalli@arm.com,
 Dharmik.Thakkar@arm.com, Ruifeng.Wang@arm.com, nd@arm.com, Ruifeng Wang
Date: Tue, 1 Oct 2019 13:28:55 -0500
Message-Id: <20191001182857.43867-2-honnappa.nagarahalli@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20191001182857.43867-1-honnappa.nagarahalli@arm.com>
References: <20190906094534.36060-1-ruifeng.wang@arm.com>
 <20191001182857.43867-1-honnappa.nagarahalli@arm.com>
Subject: [dpdk-dev] [PATCH v3 1/3] lib/lpm: integrate RCU QSBR

From: Ruifeng Wang

Currently, the tbl8 group is freed even though the readers might be
using the tbl8 group entries. The freed tbl8 group can be reallocated
quickly. This results in incorrect lookup results.

RCU QSBR process is integrated for safe tbl8 group reclaim.
Refer to RCU documentation to understand various aspects of
integrating RCU library into other libraries.
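For readers who want to see the intended call flow, below is a minimal usage
sketch (not part of the patch). It assumes a single lookup thread using
thread_id 0; the table sizes and the helper names create_lpm_with_rcu() and
reader_loop() are illustrative only. rte_lpm_rcu_qsbr_add() is the API added
by this patch, while the rte_rcu_qsbr_* calls are the existing librte_rcu
API. Once the QSBR variable is attached, tbl8 groups freed on the writer side
go through the defer queue and are reclaimed only after readers report a
quiescent state.

/*
 * Usage sketch (illustrative, not part of this patch): attach a QSBR
 * variable to an LPM table so freed tbl8 groups are reclaimed safely.
 */
#include <rte_lpm.h>
#include <rte_rcu_qsbr.h>
#include <rte_malloc.h>

#define MAX_READER_THREADS 1	/* assumption: one lookup thread, id 0 */

static struct rte_lpm *
create_lpm_with_rcu(void)
{
	struct rte_lpm_config config = {
		.max_rules = 1024,		/* illustrative sizes */
		.number_tbl8s = 256,
		.flags = 0,
	};
	struct rte_rcu_qsbr *qsv;
	struct rte_lpm *lpm;
	size_t sz;

	lpm = rte_lpm_create("example_lpm", SOCKET_ID_ANY, &config);
	if (lpm == NULL)
		return NULL;

	/* Allocate and initialize the QSBR variable for the reader threads. */
	sz = rte_rcu_qsbr_get_memsize(MAX_READER_THREADS);
	qsv = rte_zmalloc("lpm_qsv", sz, RTE_CACHE_LINE_SIZE);
	if (qsv == NULL || rte_rcu_qsbr_init(qsv, MAX_READER_THREADS) != 0)
		goto fail;

	/* Hand the QSBR variable to the LPM library (API added by this patch). */
	if (rte_lpm_rcu_qsbr_add(lpm, qsv) != 0)
		goto fail;

	return lpm;

fail:
	rte_free(qsv);
	rte_lpm_free(lpm);
	return NULL;
}

/* Reader side: register once, then report a quiescent state after each
 * lookup burst. Loop shown as infinite purely for illustration.
 */
static void
reader_loop(struct rte_rcu_qsbr *qsv, struct rte_lpm *lpm)
{
	uint32_t ip = 0x0a000001;	/* 10.0.0.1, host byte order */
	uint32_t next_hop;

	rte_rcu_qsbr_thread_register(qsv, 0);
	rte_rcu_qsbr_thread_online(qsv, 0);

	for (;;) {
		(void)rte_lpm_lookup(lpm, ip, &next_hop);
		/* No tbl8 references are held past this point. */
		rte_rcu_qsbr_quiescent(qsv, 0);
	}
}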
Signed-off-by: Ruifeng Wang
Reviewed-by: Honnappa Nagarahalli
---
 lib/librte_lpm/Makefile            |   3 +-
 lib/librte_lpm/meson.build         |   2 +
 lib/librte_lpm/rte_lpm.c           | 102 +++++++++++++++++++++++++----
 lib/librte_lpm/rte_lpm.h           |  21 ++++++
 lib/librte_lpm/rte_lpm_version.map |   6 ++
 5 files changed, 122 insertions(+), 12 deletions(-)

-- 
2.17.1

diff --git a/lib/librte_lpm/Makefile b/lib/librte_lpm/Makefile
index a7946a1c5..ca9e16312 100644
--- a/lib/librte_lpm/Makefile
+++ b/lib/librte_lpm/Makefile
@@ -6,9 +6,10 @@ include $(RTE_SDK)/mk/rte.vars.mk
 # library name
 LIB = librte_lpm.a
 
+CFLAGS += -DALLOW_EXPERIMENTAL_API
 CFLAGS += -O3
 CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR)
-LDLIBS += -lrte_eal -lrte_hash
+LDLIBS += -lrte_eal -lrte_hash -lrte_rcu
 
 EXPORT_MAP := rte_lpm_version.map
diff --git a/lib/librte_lpm/meson.build b/lib/librte_lpm/meson.build
index a5176d8ae..19a35107f 100644
--- a/lib/librte_lpm/meson.build
+++ b/lib/librte_lpm/meson.build
@@ -2,9 +2,11 @@
 # Copyright(c) 2017 Intel Corporation
 
 version = 2
+allow_experimental_apis = true
 sources = files('rte_lpm.c', 'rte_lpm6.c')
 headers = files('rte_lpm.h', 'rte_lpm6.h')
 # since header files have different names, we can install all vector headers
 # without worrying about which architecture we actually need
 headers += files('rte_lpm_altivec.h', 'rte_lpm_neon.h', 'rte_lpm_sse.h')
 deps += ['hash']
+deps += ['rcu']
diff --git a/lib/librte_lpm/rte_lpm.c b/lib/librte_lpm/rte_lpm.c
index 3a929a1b1..ca58d4b35 100644
--- a/lib/librte_lpm/rte_lpm.c
+++ b/lib/librte_lpm/rte_lpm.c
@@ -1,5 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright(c) 2010-2014 Intel Corporation
+ * Copyright(c) 2019 Arm Limited
  */
 
 #include
@@ -381,6 +382,8 @@ rte_lpm_free_v1604(struct rte_lpm *lpm)
 
 	rte_mcfg_tailq_write_unlock();
 
+	if (lpm->dq)
+		rte_rcu_qsbr_dq_delete(lpm->dq);
 	rte_free(lpm->tbl8);
 	rte_free(lpm->rules_tbl);
 	rte_free(lpm);
@@ -390,6 +393,59 @@ BIND_DEFAULT_SYMBOL(rte_lpm_free, _v1604, 16.04);
 MAP_STATIC_SYMBOL(void rte_lpm_free(struct rte_lpm *lpm),
 		rte_lpm_free_v1604);
 
+struct __rte_lpm_rcu_dq_entry {
+	uint32_t tbl8_group_index;
+	uint32_t pad;
+};
+
+static void
+__lpm_rcu_qsbr_free_resource(void *p, void *data)
+{
+	struct rte_lpm_tbl_entry zero_tbl8_entry = {0};
+	struct __rte_lpm_rcu_dq_entry *e =
+			(struct __rte_lpm_rcu_dq_entry *)data;
+	struct rte_lpm_tbl_entry *tbl8 = (struct rte_lpm_tbl_entry *)p;
+
+	/* Set tbl8 group invalid */
+	__atomic_store(&tbl8[e->tbl8_group_index], &zero_tbl8_entry,
+		__ATOMIC_RELAXED);
+}
+
+/* Associate QSBR variable with an LPM object.
+ */
+int
+rte_lpm_rcu_qsbr_add(struct rte_lpm *lpm, struct rte_rcu_qsbr *v)
+{
+	char rcu_dq_name[RTE_RCU_QSBR_DQ_NAMESIZE];
+	struct rte_rcu_qsbr_dq_parameters params;
+
+	if ((lpm == NULL) || (v == NULL)) {
+		rte_errno = EINVAL;
+		return 1;
+	}
+
+	if (lpm->dq) {
+		rte_errno = EEXIST;
+		return 1;
+	}
+
+	/* Init QSBR defer queue. */
+	snprintf(rcu_dq_name, sizeof(rcu_dq_name), "LPM_RCU_%s", lpm->name);
+	params.name = rcu_dq_name;
+	params.size = lpm->number_tbl8s;
+	params.esize = sizeof(struct __rte_lpm_rcu_dq_entry);
+	params.f = __lpm_rcu_qsbr_free_resource;
+	params.p = lpm->tbl8;
+	params.v = v;
+	lpm->dq = rte_rcu_qsbr_dq_create(&params);
+	if (lpm->dq == NULL) {
+		RTE_LOG(ERR, LPM, "LPM QS defer queue creation failed\n");
+		return 1;
+	}
+
+	return 0;
+}
+
 /*
  * Adds a rule to the rule table.
  *
@@ -679,14 +735,15 @@ tbl8_alloc_v20(struct rte_lpm_tbl_entry_v20 *tbl8)
 }
 
 static int32_t
-tbl8_alloc_v1604(struct rte_lpm_tbl_entry *tbl8, uint32_t number_tbl8s)
+__tbl8_alloc_v1604(struct rte_lpm *lpm)
 {
 	uint32_t group_idx; /* tbl8 group index. */
 	struct rte_lpm_tbl_entry *tbl8_entry;
 
 	/* Scan through tbl8 to find a free (i.e. INVALID) tbl8 group. */
-	for (group_idx = 0; group_idx < number_tbl8s; group_idx++) {
-		tbl8_entry = &tbl8[group_idx * RTE_LPM_TBL8_GROUP_NUM_ENTRIES];
+	for (group_idx = 0; group_idx < lpm->number_tbl8s; group_idx++) {
+		tbl8_entry = &lpm->tbl8[group_idx *
+					RTE_LPM_TBL8_GROUP_NUM_ENTRIES];
 		/* If a free tbl8 group is found clean it and set as VALID. */
 		if (!tbl8_entry->valid_group) {
 			struct rte_lpm_tbl_entry new_tbl8_entry = {
@@ -712,6 +769,21 @@ tbl8_alloc_v1604(struct rte_lpm_tbl_entry *tbl8, uint32_t number_tbl8s)
 	return -ENOSPC;
 }
 
+static int32_t
+tbl8_alloc_v1604(struct rte_lpm *lpm)
+{
+	int32_t group_idx; /* tbl8 group index. */
+
+	group_idx = __tbl8_alloc_v1604(lpm);
+	if ((group_idx < 0) && (lpm->dq != NULL)) {
+		/* If there are no tbl8 groups try to reclaim some. */
+		if (rte_rcu_qsbr_dq_reclaim(lpm->dq) == 0)
+			group_idx = __tbl8_alloc_v1604(lpm);
+	}
+
+	return group_idx;
+}
+
 static void
 tbl8_free_v20(struct rte_lpm_tbl_entry_v20 *tbl8, uint32_t tbl8_group_start)
 {
@@ -728,13 +800,21 @@ tbl8_free_v20(struct rte_lpm_tbl_entry_v20 *tbl8, uint32_t tbl8_group_start)
 }
 
 static void
-tbl8_free_v1604(struct rte_lpm_tbl_entry *tbl8, uint32_t tbl8_group_start)
+tbl8_free_v1604(struct rte_lpm *lpm, uint32_t tbl8_group_start)
 {
-	/* Set tbl8 group invalid*/
 	struct rte_lpm_tbl_entry zero_tbl8_entry = {0};
+	struct __rte_lpm_rcu_dq_entry e;
 
-	__atomic_store(&tbl8[tbl8_group_start], &zero_tbl8_entry,
-			__ATOMIC_RELAXED);
+	if (lpm->dq != NULL) {
+		e.tbl8_group_index = tbl8_group_start;
+		e.pad = 0;
+		/* Push into QSBR defer queue. */
+		rte_rcu_qsbr_dq_enqueue(lpm->dq, (void *)&e);
+	} else {
+		/* Set tbl8 group invalid*/
+		__atomic_store(&lpm->tbl8[tbl8_group_start], &zero_tbl8_entry,
+				__ATOMIC_RELAXED);
+	}
 }
 
 static __rte_noinline int32_t
@@ -1037,7 +1117,7 @@ add_depth_big_v1604(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
 
 	if (!lpm->tbl24[tbl24_index].valid) {
 		/* Search for a free tbl8 group. */
-		tbl8_group_index = tbl8_alloc_v1604(lpm->tbl8, lpm->number_tbl8s);
+		tbl8_group_index = tbl8_alloc_v1604(lpm);
 
 		/* Check tbl8 allocation was successful. */
 		if (tbl8_group_index < 0) {
@@ -1083,7 +1163,7 @@ add_depth_big_v1604(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
 	} /* If valid entry but not extended calculate the index into Table8. */
 	else if (lpm->tbl24[tbl24_index].valid_group == 0) {
 		/* Search for free tbl8 group. */
-		tbl8_group_index = tbl8_alloc_v1604(lpm->tbl8, lpm->number_tbl8s);
+		tbl8_group_index = tbl8_alloc_v1604(lpm);
 
 		if (tbl8_group_index < 0) {
 			return tbl8_group_index;
@@ -1818,7 +1898,7 @@ delete_depth_big_v1604(struct rte_lpm *lpm, uint32_t ip_masked,
 		 */
 		lpm->tbl24[tbl24_index].valid = 0;
 		__atomic_thread_fence(__ATOMIC_RELEASE);
-		tbl8_free_v1604(lpm->tbl8, tbl8_group_start);
+		tbl8_free_v1604(lpm, tbl8_group_start);
 	} else if (tbl8_recycle_index > -1) {
 		/* Update tbl24 entry. */
 		struct rte_lpm_tbl_entry new_tbl24_entry = {
@@ -1834,7 +1914,7 @@ delete_depth_big_v1604(struct rte_lpm *lpm, uint32_t ip_masked,
 		__atomic_store(&lpm->tbl24[tbl24_index], &new_tbl24_entry,
 				__ATOMIC_RELAXED);
 		__atomic_thread_fence(__ATOMIC_RELEASE);
-		tbl8_free_v1604(lpm->tbl8, tbl8_group_start);
+		tbl8_free_v1604(lpm, tbl8_group_start);
 	}
 #undef group_idx
 	return 0;
diff --git a/lib/librte_lpm/rte_lpm.h b/lib/librte_lpm/rte_lpm.h
index 906ec4483..49c12a68d 100644
--- a/lib/librte_lpm/rte_lpm.h
+++ b/lib/librte_lpm/rte_lpm.h
@@ -1,5 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright(c) 2010-2014 Intel Corporation
+ * Copyright(c) 2019 Arm Limited
  */
 
 #ifndef _RTE_LPM_H_
@@ -21,6 +22,7 @@
 #include
 #include
 #include
+#include <rte_rcu_qsbr.h>
 
 #ifdef __cplusplus
 extern "C" {
@@ -186,6 +188,7 @@ struct rte_lpm {
 			__rte_cache_aligned; /**< LPM tbl24 table. */
 	struct rte_lpm_tbl_entry *tbl8; /**< LPM tbl8 table. */
 	struct rte_lpm_rule *rules_tbl; /**< LPM rules. */
+	struct rte_rcu_qsbr_dq *dq;	/**< RCU QSBR defer queue.*/
 };
 
 /**
@@ -248,6 +251,24 @@ rte_lpm_free_v20(struct rte_lpm_v20 *lpm);
 void
 rte_lpm_free_v1604(struct rte_lpm *lpm);
 
+/**
+ * Associate RCU QSBR variable with an LPM object.
+ *
+ * @param lpm
+ *   the lpm object to add RCU QSBR
+ * @param v
+ *   RCU QSBR variable
+ * @return
+ *   On success - 0
+ *   On error - 1 with error code set in rte_errno.
+ *   Possible rte_errno codes are:
+ *   - EINVAL - invalid pointer
+ *   - EEXIST - already added QSBR
+ *   - ENOMEM - memory allocation failure
+ */
+__rte_experimental
+int rte_lpm_rcu_qsbr_add(struct rte_lpm *lpm, struct rte_rcu_qsbr *v);
+
 /**
  * Add a rule to the LPM table.
  *
diff --git a/lib/librte_lpm/rte_lpm_version.map b/lib/librte_lpm/rte_lpm_version.map
index 90beac853..b353aabd2 100644
--- a/lib/librte_lpm/rte_lpm_version.map
+++ b/lib/librte_lpm/rte_lpm_version.map
@@ -44,3 +44,9 @@ DPDK_17.05 {
 	rte_lpm6_lookup_bulk_func;
 
 } DPDK_16.04;
+
+EXPERIMENTAL {
+	global:
+
+	rte_lpm_rcu_qsbr_add;
+};