From patchwork Thu Feb 22 10:00:22 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Github ODP bot
X-Patchwork-Id: 129180
Delivered-To: patch@linaro.org
From: Github ODP bot
To: lng-odp@lists.linaro.org
Date: Thu, 22 Feb 2018 13:00:22 +0300
Message-Id: <1519293622-14665-11-git-send-email-odpbot@yandex.ru>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1519293622-14665-1-git-send-email-odpbot@yandex.ru>
References: <1519293622-14665-1-git-send-email-odpbot@yandex.ru>
Github-pr-num: 492
Subject: [lng-odp] [PATCH v2 10/10] test: l2fwd: increase num pkt and honour pool capability

From: Petri Savolainen

Increase the number of packets to 16k, since 8k packets limit throughput
in 40 Gbit testing. Also limit packet count and length to the pool
capability maximums when needed.

Signed-off-by: Petri Savolainen
---
/** Email created from pull request 492 (psavol:master-sched-optim)
 ** https://github.com/Linaro/odp/pull/492
 ** Patch: https://github.com/Linaro/odp/pull/492.patch
 ** Base sha: 5a58bbf2bb331fd7dde2ebbc0430634ace6900fb
 ** Merge commit sha: b29563293c1bca56419d2dc355a8e64d961e024a
 **/
 test/performance/odp_l2fwd.c | 44 ++++++++++++++++++++++++++++++--------------
 1 file changed, 30 insertions(+), 14 deletions(-)

diff --git a/test/performance/odp_l2fwd.c b/test/performance/odp_l2fwd.c
index 0a76d8b67..b2d380984 100644
--- a/test/performance/odp_l2fwd.c
+++ b/test/performance/odp_l2fwd.c
@@ -28,10 +28,10 @@
 #define MAX_WORKERS 32
 
 /* Size of the shared memory block */
-#define SHM_PKT_POOL_SIZE 8192
+#define POOL_PKT_NUM (16 * 1024)
 
 /* Buffer size of the packet pool buffer */
-#define SHM_PKT_POOL_BUF_SIZE 1856
+#define POOL_PKT_LEN 1536
 
 /* Maximum number of packet in a burst */
 #define MAX_PKT_BURST 32
 
@@ -663,7 +663,7 @@ static int create_pktio(const char *dev, int idx, int num_rx, int num_tx,
 	odp_pktio_t pktio;
 	odp_pktio_param_t pktio_param;
 	odp_schedule_sync_t sync_mode;
-	odp_pktio_capability_t capa;
+	odp_pktio_capability_t pktio_capa;
 	odp_pktio_config_t config;
 	odp_pktin_queue_param_t pktin_param;
 	odp_pktout_queue_param_t pktout_param;
 
@@ -699,8 +699,8 @@ static int create_pktio(const char *dev, int idx, int num_rx, int num_tx,
 	if (gbl_args->appl.verbose)
 		odp_pktio_print(pktio);
 
-	if (odp_pktio_capability(pktio, &capa)) {
-		LOG_ERR("Error: capability query failed %s\n", dev);
+	if (odp_pktio_capability(pktio, &pktio_capa)) {
+		LOG_ERR("Error: pktio capability query failed %s\n", dev);
 		return -1;
 	}
 
@@ -739,17 +739,17 @@ static int create_pktio(const char *dev, int idx, int num_rx, int num_tx,
 		pktin_param.queue_param.sched.group = group;
 	}
 
-	if (num_rx > (int)capa.max_input_queues) {
+	if (num_rx > (int)pktio_capa.max_input_queues) {
 		printf("Sharing %i input queues between %i workers\n",
-		       capa.max_input_queues, num_rx);
-		num_rx = capa.max_input_queues;
+		       pktio_capa.max_input_queues, num_rx);
+		num_rx = pktio_capa.max_input_queues;
 		mode_rx = ODP_PKTIO_OP_MT;
 	}
 
-	if (num_tx > (int)capa.max_output_queues) {
+	if (num_tx > (int)pktio_capa.max_output_queues) {
 		printf("Sharing %i output queues between %i workers\n",
-		       capa.max_output_queues, num_tx);
-		num_tx = capa.max_output_queues;
+		       pktio_capa.max_output_queues, num_tx);
+		num_tx = pktio_capa.max_output_queues;
 		mode_tx = ODP_PKTIO_OP_MT;
 	}
 
@@ -1446,6 +1446,8 @@ int main(int argc, char *argv[])
 	int num_groups;
 	odp_schedule_group_t group[MAX_PKTIOS];
 	odp_init_t init;
+	odp_pool_capability_t pool_capa;
+	uint32_t pkt_len, pkt_num;
 
 	odp_init_param_init(&init);
 
@@ -1525,11 +1527,25 @@ int main(int argc, char *argv[])
 		exit(EXIT_FAILURE);
 	}
 
+	if (odp_pool_capability(&pool_capa)) {
+		LOG_ERR("Error: pool capability failed\n");
+		return -1;
+	}
+
+	pkt_len = POOL_PKT_LEN;
+	pkt_num = POOL_PKT_NUM;
+
+	if (pool_capa.pkt.max_len && pkt_len > pool_capa.pkt.max_len)
+		pkt_len = pool_capa.pkt.max_len;
+
+	if (pool_capa.pkt.max_num && pkt_num > pool_capa.pkt.max_num)
+		pkt_num = pool_capa.pkt.max_num;
+
 	/* Create packet pool */
 	odp_pool_param_init(&params);
-	params.pkt.seg_len = SHM_PKT_POOL_BUF_SIZE;
-	params.pkt.len = SHM_PKT_POOL_BUF_SIZE;
-	params.pkt.num = SHM_PKT_POOL_SIZE;
+	params.pkt.seg_len = pkt_len;
+	params.pkt.len = pkt_len;
+	params.pkt.num = pkt_num;
 	params.type = ODP_POOL_PACKET;
 
 	pool = odp_pool_create("packet pool", &params);