From patchwork Fri Oct 23 04:43:42 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
X-Patchwork-Id: 318910
Delivered-To: patch@linaro.org
From: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
To: dev@dpdk.org, honnappa.nagarahalli@arm.com, konstantin.ananyev@intel.com
Cc: olivier.matz@6wind.com, david.marchand@redhat.com,
	dharmik.thakkar@arm.com, ruifeng.wang@arm.com, nd@arm.com
Date: Thu, 22 Oct 2020 23:43:42 -0500
Message-Id: <20201023044343.13462-5-honnappa.nagarahalli@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20201023044343.13462-1-honnappa.nagarahalli@arm.com>
References: <20200224203931.21256-1-honnappa.nagarahalli@arm.com>
	<20201023044343.13462-1-honnappa.nagarahalli@arm.com>
Subject: [dpdk-dev] [PATCH v3 4/5] test/ring: add functional tests for zero copy APIs
List-Id: DPDK patches and discussions

Add functional tests for the zero-copy APIs. Test enqueue/dequeue
functions are created as wrappers around the zero-copy APIs so that
they fit into the existing test framework.

Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
---
 app/test/test_ring.c | 196 +++++++++++++++++++++++++++++++++++++++++++
 app/test/test_ring.h |  42 ++++++++++
 2 files changed, 238 insertions(+)

-- 
2.17.1

Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
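[Editor's note, not part of the patch] The zero-copy APIs exercised below follow a
reserve/copy/commit pattern: a *_zc_*_start() call reserves space in the ring and
describes it in a struct rte_ring_zc_data (ptr1/n1, plus ptr2 for the wrap-around
case), the caller performs the copy itself, and *_zc_finish() publishes the objects.
A minimal sketch of that pattern on the enqueue side is shown here; the helper name
is hypothetical and, depending on the DPDK version, the zero-copy declarations may
require rte_ring_peek_zc.h and ALLOW_EXPERIMENTAL_API.

#include <string.h>
#include <rte_ring.h>
#include <rte_ring_elem.h>	/* zero-copy APIs; some versions need rte_ring_peek_zc.h */

/* Illustrative helper: enqueue n pointers using the zero-copy bulk API. */
static unsigned int
zc_enqueue_ptrs(struct rte_ring *r, void * const *objs, unsigned int n)
{
	struct rte_ring_zc_data zcd;
	unsigned int reserved, free_space;

	/* Reserve space for n objects; zcd tells us where to copy them. */
	reserved = rte_ring_enqueue_zc_bulk_start(r, n, &zcd, &free_space);
	if (reserved == 0)
		return 0;	/* not enough free space in the ring */

	/* First contiguous region ... */
	memcpy(zcd.ptr1, objs, zcd.n1 * sizeof(void *));
	/* ... and the wrap-around remainder, if the reservation crossed the end. */
	if (zcd.n1 != reserved)
		memcpy(zcd.ptr2, objs + zcd.n1,
			(reserved - zcd.n1) * sizeof(void *));

	/* Publish the reserved objects to consumers. */
	rte_ring_enqueue_zc_finish(r, reserved);
	return reserved;
}

The dequeue side mirrors this with rte_ring_dequeue_zc_bulk_start()/
rte_ring_dequeue_zc_finish(); the test wrappers in the patch package exactly this
sequence behind the legacy enqueue/dequeue signatures.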
diff --git a/app/test/test_ring.c b/app/test/test_ring.c
index 329d538a9..99fe4b46f 100644
--- a/app/test/test_ring.c
+++ b/app/test/test_ring.c
@@ -1,5 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright(c) 2010-2014 Intel Corporation
+ * Copyright(c) 2020 Arm Limited
  */
 
 #include
@@ -68,6 +69,149 @@ static const int esize[] = {-1, 4, 8, 16, 20};
 
+/* Wrappers around the zero-copy APIs. The wrappers match
+ * the normal enqueue/dequeue API declarations.
+ */
+static unsigned int
+test_ring_enqueue_zc_bulk(struct rte_ring *r, void * const *obj_table,
+	unsigned int n, unsigned int *free_space)
+{
+	unsigned int ret;
+	struct rte_ring_zc_data zcd;
+
+	ret = rte_ring_enqueue_zc_bulk_start(r, n, &zcd, free_space);
+	if (ret > 0) {
+		/* Copy the data to the ring */
+		test_ring_copy_to(&zcd, obj_table, sizeof(void *), ret);
+		rte_ring_enqueue_zc_finish(r, ret);
+	}
+
+	return ret;
+}
+
+static unsigned int
+test_ring_enqueue_zc_bulk_elem(struct rte_ring *r, const void *obj_table,
+	unsigned int esize, unsigned int n, unsigned int *free_space)
+{
+	unsigned int ret;
+	struct rte_ring_zc_data zcd;
+
+	ret = rte_ring_enqueue_zc_bulk_elem_start(r, esize, n,
+				&zcd, free_space);
+	if (ret > 0) {
+		/* Copy the data to the ring */
+		test_ring_copy_to(&zcd, obj_table, esize, ret);
+		rte_ring_enqueue_zc_finish(r, ret);
+	}
+
+	return ret;
+}
+
+static unsigned int
+test_ring_enqueue_zc_burst(struct rte_ring *r, void * const *obj_table,
+	unsigned int n, unsigned int *free_space)
+{
+	unsigned int ret;
+	struct rte_ring_zc_data zcd;
+
+	ret = rte_ring_enqueue_zc_burst_start(r, n, &zcd, free_space);
+	if (ret > 0) {
+		/* Copy the data to the ring */
+		test_ring_copy_to(&zcd, obj_table, sizeof(void *), ret);
+		rte_ring_enqueue_zc_finish(r, ret);
+	}
+
+	return ret;
+}
+
+static unsigned int
+test_ring_enqueue_zc_burst_elem(struct rte_ring *r, const void *obj_table,
+	unsigned int esize, unsigned int n, unsigned int *free_space)
+{
+	unsigned int ret;
+	struct rte_ring_zc_data zcd;
+
+	ret = rte_ring_enqueue_zc_burst_elem_start(r, esize, n,
+				&zcd, free_space);
+	if (ret > 0) {
+		/* Copy the data to the ring */
+		test_ring_copy_to(&zcd, obj_table, esize, ret);
+		rte_ring_enqueue_zc_finish(r, ret);
+	}
+
+	return ret;
+}
+
+static unsigned int
+test_ring_dequeue_zc_bulk(struct rte_ring *r, void **obj_table,
+	unsigned int n, unsigned int *available)
+{
+	unsigned int ret;
+	struct rte_ring_zc_data zcd;
+
+	ret = rte_ring_dequeue_zc_bulk_start(r, n, &zcd, available);
+	if (ret > 0) {
+		/* Copy the data from the ring */
+		test_ring_copy_from(&zcd, obj_table, sizeof(void *), ret);
+		rte_ring_dequeue_zc_finish(r, ret);
+	}
+
+	return ret;
+}
+
+static unsigned int
+test_ring_dequeue_zc_bulk_elem(struct rte_ring *r, void *obj_table,
+	unsigned int esize, unsigned int n, unsigned int *available)
+{
+	unsigned int ret;
+	struct rte_ring_zc_data zcd;
+
+	ret = rte_ring_dequeue_zc_bulk_elem_start(r, esize, n,
+				&zcd, available);
+	if (ret > 0) {
+		/* Copy the data from the ring */
+		test_ring_copy_from(&zcd, obj_table, esize, ret);
+		rte_ring_dequeue_zc_finish(r, ret);
+	}
+
+	return ret;
+}
+
+static unsigned int
+test_ring_dequeue_zc_burst(struct rte_ring *r, void **obj_table,
+	unsigned int n, unsigned int *available)
+{
+	unsigned int ret;
+	struct rte_ring_zc_data zcd;
+
+	ret = rte_ring_dequeue_zc_burst_start(r, n, &zcd, available);
+	if (ret > 0) {
+		/* Copy the data from the ring */
+		test_ring_copy_from(&zcd, obj_table, sizeof(void *), ret);
+		rte_ring_dequeue_zc_finish(r, ret);
+	}
+
+	return ret;
+}
+
+static unsigned int
+test_ring_dequeue_zc_burst_elem(struct rte_ring *r, void *obj_table,
+	unsigned int esize, unsigned int n, unsigned int *available)
+{
+	unsigned int ret;
+	struct rte_ring_zc_data zcd;
+
+	ret = rte_ring_dequeue_zc_burst_elem_start(r, esize, n,
+				&zcd, available);
+	if (ret > 0) {
+		/* Copy the data from the ring */
+		test_ring_copy_from(&zcd, obj_table, esize, ret);
+		rte_ring_dequeue_zc_finish(r, ret);
+	}
+
+	return ret;
+}
+
 static const struct {
 	const char *desc;
 	uint32_t api_type;
@@ -219,6 +363,58 @@ static const struct {
 			.felem = rte_ring_dequeue_burst_elem,
 		},
 	},
+	{
+		.desc = "SP/SC sync mode (ZC)",
+		.api_type = TEST_RING_ELEM_BULK | TEST_RING_THREAD_SPSC,
+		.create_flags = RING_F_SP_ENQ | RING_F_SC_DEQ,
+		.enq = {
+			.flegacy = test_ring_enqueue_zc_bulk,
+			.felem = test_ring_enqueue_zc_bulk_elem,
+		},
+		.deq = {
+			.flegacy = test_ring_dequeue_zc_bulk,
+			.felem = test_ring_dequeue_zc_bulk_elem,
+		},
+	},
+	{
+		.desc = "MP_HTS/MC_HTS sync mode (ZC)",
+		.api_type = TEST_RING_ELEM_BULK | TEST_RING_THREAD_DEF,
+		.create_flags = RING_F_MP_HTS_ENQ | RING_F_MC_HTS_DEQ,
+		.enq = {
+			.flegacy = test_ring_enqueue_zc_bulk,
+			.felem = test_ring_enqueue_zc_bulk_elem,
+		},
+		.deq = {
+			.flegacy = test_ring_dequeue_zc_bulk,
+			.felem = test_ring_dequeue_zc_bulk_elem,
+		},
+	},
+	{
+		.desc = "SP/SC sync mode (ZC)",
+		.api_type = TEST_RING_ELEM_BURST | TEST_RING_THREAD_SPSC,
+		.create_flags = RING_F_SP_ENQ | RING_F_SC_DEQ,
+		.enq = {
+			.flegacy = test_ring_enqueue_zc_burst,
+			.felem = test_ring_enqueue_zc_burst_elem,
+		},
+		.deq = {
+			.flegacy = test_ring_dequeue_zc_burst,
+			.felem = test_ring_dequeue_zc_burst_elem,
+		},
+	},
+	{
+		.desc = "MP_HTS/MC_HTS sync mode (ZC)",
+		.api_type = TEST_RING_ELEM_BURST | TEST_RING_THREAD_DEF,
+		.create_flags = RING_F_MP_HTS_ENQ | RING_F_MC_HTS_DEQ,
+		.enq = {
+			.flegacy = test_ring_enqueue_zc_burst,
+			.felem = test_ring_enqueue_zc_burst_elem,
+		},
+		.deq = {
+			.flegacy = test_ring_dequeue_zc_burst,
+			.felem = test_ring_dequeue_zc_burst_elem,
+		},
+	}
 };
 
 static unsigned int
diff --git a/app/test/test_ring.h b/app/test/test_ring.h
index 16697ee02..33c8a31fe 100644
--- a/app/test/test_ring.h
+++ b/app/test/test_ring.h
@@ -53,6 +53,48 @@ test_ring_inc_ptr(void **obj, int esize, unsigned int n)
 			(n * esize / sizeof(uint32_t)));
 }
 
+static inline void
+test_ring_mem_copy(void *dst, void * const *src, int esize, unsigned int num)
+{
+	size_t temp_sz;
+
+	temp_sz = num * sizeof(void *);
+	if (esize != -1)
+		temp_sz = esize * num;
+
+	memcpy(dst, src, temp_sz);
+}
+
+/* Copy to the ring memory */
+static inline void
+test_ring_copy_to(struct rte_ring_zc_data *zcd, void * const *src, int esize,
+	unsigned int num)
+{
+	test_ring_mem_copy(zcd->ptr1, src, esize, zcd->n1);
+	if (zcd->n1 != num) {
+		if (esize == -1)
+			src = src + zcd->n1;
+		else
+			src = (void * const *)(((const uint32_t *)src) +
+					(zcd->n1 * esize / sizeof(uint32_t)));
+		test_ring_mem_copy(zcd->ptr2, src,
+					esize, num - zcd->n1);
+	}
+}
+
+/* Copy from the ring memory */
+static inline void
+test_ring_copy_from(struct rte_ring_zc_data *zcd, void *dst, int esize,
+	unsigned int num)
+{
+	test_ring_mem_copy(dst, zcd->ptr1, esize, zcd->n1);
+
+	if (zcd->n1 != num) {
+		dst = test_ring_inc_ptr(dst, esize, zcd->n1);
+		test_ring_mem_copy(dst, zcd->ptr2, esize, num - zcd->n1);
+	}
+}
+
 static __rte_always_inline unsigned int
 test_ring_enqueue(struct rte_ring *r, void **obj, int esize, unsigned int n,
 		unsigned int api_type)
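[Editor's note, not part of the patch] For readers who want to exercise the new
wrappers outside the table-driven runs, a rough smoke-test sketch follows. It is
purely illustrative: it assumes it is placed in app/test/test_ring.c next to the
wrappers (so the static functions and existing includes are in scope), and the
test name, ring name, size, and payload values are arbitrary.

/* Hypothetical smoke test (illustration only, not part of this patch). */
static int
test_ring_zc_smoke(void)
{
	void *objs[8], *res[8] = {0};
	unsigned int i, n, space;
	struct rte_ring *r;

	/* HTS sync mode matches the "MP_HTS/MC_HTS sync mode (ZC)" entries above. */
	r = rte_ring_create("zc_smoke", 32, SOCKET_ID_ANY,
			RING_F_MP_HTS_ENQ | RING_F_MC_HTS_DEQ);
	if (r == NULL)
		return -1;

	for (i = 0; i < RTE_DIM(objs); i++)
		objs[i] = (void *)(uintptr_t)(i + 1);

	/* The wrappers keep the legacy bulk enqueue/dequeue signatures,
	 * which is why they drop straight into the existing test table.
	 */
	n = test_ring_enqueue_zc_bulk(r, objs, RTE_DIM(objs), &space);
	if (n != RTE_DIM(objs))
		goto fail;
	n = test_ring_dequeue_zc_bulk(r, res, RTE_DIM(res), &space);
	if (n != RTE_DIM(res) || memcmp(objs, res, sizeof(objs)) != 0)
		goto fail;

	rte_ring_free(r);
	return 0;
fail:
	rte_ring_free(r);
	return -1;
}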