From patchwork Fri Mar 6 09:51:26 2015
X-Patchwork-Submitter: Maxim Uvarov
X-Patchwork-Id: 45477
From: Maxim Uvarov <maxim.uvarov@linaro.org>
To: lng-odp@lists.linaro.org
Date: Fri, 6 Mar 2015 12:51:26 +0300
Message-Id: <1425635486-9936-1-git-send-email-maxim.uvarov@linaro.org>
X-Mailer: git-send-email 1.9.1
Subject: [lng-odp] [PATCHv9] linux-generic: mmap: jumbo frames support

Support jumbo frames in linux-generic with unsegmented buffers. The pktio
test is also adjusted to work with 9*1024 = 9216 byte packets.

https://bugs.linaro.org/show_bug.cgi?id=509

Signed-off-by: Maxim Uvarov <maxim.uvarov@linaro.org>
---
 v9: - rename pool entry from pool to pool_entry
 v8: - fix test for invalid offset
     - define jumbo packet size as (9*1024) in test
 v7: - add page alignment for the frame size
 v6: - rewrite mmap_fill_ring to take the pool settings into account
 v5: - test_4_jumbo_pkts -> test_jumbo
     - do not use the stack for the jumbo packet, simply allocate it
 v4: - fix work on real interfaces (make check under root)
     - better define the jumbo packet payload size
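The core of the change is the ring sizing in mmap_fill_ring(): each TPACKET
frame is sized to hold one full pool block plus the TPACKET header, rounded
up to a page, and one block is made large enough to hold every buffer in the
pool. A minimal standalone sketch of that arithmetic follows (not part of the
patch; blk_size and buf_num are example values standing in for
pool_entry->s.blk_size and pool_entry->s.buf_num):

/* Sketch of the mmap_fill_ring() sizing arithmetic with example values. */
#include <stdio.h>
#include <unistd.h>
#include <linux/if_packet.h>

int main(void)
{
	int pz = getpagesize();
	unsigned int blk_size = 9 * 1024;	/* example pool block size */
	unsigned int buf_num = 32;		/* example number of pool buffers */
	unsigned int frame_size, block_size;

	/* One frame must hold a whole packet plus the TPACKET header,
	 * rounded up to a page boundary. */
	frame_size = (blk_size + TPACKET_HDRLEN + TPACKET_ALIGNMENT +
		      (pz - 1)) & (-pz);

	/* One block large enough for all pool buffers, page aligned. */
	block_size = (frame_size * buf_num + (pz - 1)) & (-pz);

	printf("frame_size=%u block_size=%u frames per block=%u\n",
	       frame_size, block_size, block_size / frame_size);
	return 0;
}

With a 4 KiB page this gives a 12 KiB frame for a 9 KiB pool block, so a
jumbo packet always fits in a single (unsegmented) ring frame.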
 platform/linux-generic/odp_packet_socket.c | 43 +++++++++++---
 test/validation/odp_pktio.c                | 95 +++++++++++++++++++++++-------
 test/validation/odp_pktio_run              |  4 +-
 3 files changed, 110 insertions(+), 32 deletions(-)

diff --git a/platform/linux-generic/odp_packet_socket.c b/platform/linux-generic/odp_packet_socket.c
index 55c212e..e8626c6 100644
--- a/platform/linux-generic/odp_packet_socket.c
+++ b/platform/linux-generic/odp_packet_socket.c
@@ -109,6 +109,7 @@ static int set_pkt_sock_fanout_mmap(pkt_sock_mmap_t *const pkt_sock,
 	fanout_group = (uint16_t) (sock_group_idx & 0xffff);
 	val = (PACKET_FANOUT_HASH << 16) | fanout_group;
+	printf("\n\n fanout group %d\n\n\n", val);
 
 	err = setsockopt(sockfd, SOL_PACKET, PACKET_FANOUT,
 			 &val, sizeof(val));
 	if (err != 0) {
@@ -584,11 +585,35 @@ static inline unsigned pkt_mmap_v2_tx(int sock, struct ring *ring,
 	return i;
 }
 
-static void mmap_fill_ring(struct ring *ring, unsigned blocks)
+static void mmap_fill_ring(struct ring *ring, odp_pool_t pool_hdl, int fanout)
 {
-	ring->req.tp_block_size = getpagesize() << 2;
-	ring->req.tp_frame_size = TPACKET_ALIGNMENT << 7;
-	ring->req.tp_block_nr = blocks;
+	/*@todo add Huge Pages support*/
+	int pz = getpagesize();
+	uint32_t pool_id = pool_handle_to_index(pool_hdl);
+	pool_entry_t *pool_entry = get_pool_entry(pool_id);
+
+	if (pool_entry == NULL || pool_hdl == ODP_POOL_INVALID)
+		ODP_ABORT("NULL pool entry");
+
+	/* Frame has to capture a full packet which can fit into the pool block. */
+	ring->req.tp_frame_size = (pool_entry->s.blk_size +
+				   TPACKET_HDRLEN + TPACKET_ALIGNMENT +
+				   (pz - 1)) & (-pz);
+
+	/* Calculate how many pages are needed to hold all pool packets
+	 * and align the size to the page boundary.
+	 */
+	ring->req.tp_block_size = (ring->req.tp_frame_size *
+				   pool_entry->s.buf_num + (pz - 1)) & (-pz);
+
+	if (!fanout) {
+		/* Single socket is in use. Use 1 block with buf_num frames. */
+		ring->req.tp_block_nr = 1;
+	} else {
+		/* Fanout is in use; traffic is most likely split according to
+		 * the number of cpu threads. Use cpu blocks and buf_num frames.
+		 */
+		ring->req.tp_block_nr = odp_cpu_count();
+	}
 
 	ring->req.tp_frame_nr = ring->req.tp_block_size /
 				ring->req.tp_frame_size * ring->req.tp_block_nr;
@@ -613,10 +638,10 @@ static int mmap_set_packet_loss_discard(int sock)
 	return 0;
 }
 
-static int mmap_setup_ring(int sock, struct ring *ring, int type)
+static int mmap_setup_ring(int sock, struct ring *ring, int type,
+			   odp_pool_t pool_hdl, int fanout)
 {
 	int ret = 0;
-	unsigned blocks = 256;
 
 	ring->sock = sock;
 	ring->type = type;
@@ -628,7 +653,7 @@ static int mmap_setup_ring(int sock, struct ring *ring, int type)
 		return -1;
 	}
 
-	mmap_fill_ring(ring, blocks);
+	mmap_fill_ring(ring, pool_hdl, fanout);
 
 	ret = setsockopt(sock, SOL_PACKET, type, &ring->req, sizeof(ring->req));
 	if (ret == -1) {
@@ -772,12 +797,12 @@ int setup_pkt_sock_mmap(pkt_sock_mmap_t *const pkt_sock, const char *netdev,
 		return -1;
 
 	ret = mmap_setup_ring(pkt_sock->sockfd, &pkt_sock->tx_ring,
-			      PACKET_TX_RING);
+			      PACKET_TX_RING, pool, fanout);
 	if (ret != 0)
 		return -1;
 
 	ret = mmap_setup_ring(pkt_sock->sockfd, &pkt_sock->rx_ring,
-			      PACKET_RX_RING);
+			      PACKET_RX_RING, pool, fanout);
 	if (ret != 0)
 		return -1;
 
diff --git a/test/validation/odp_pktio.c b/test/validation/odp_pktio.c
index 8df367d..dcb9fcf 100644
--- a/test/validation/odp_pktio.c
+++ b/test/validation/odp_pktio.c
@@ -15,6 +15,10 @@
 #define PKT_BUF_NUM 32
 #define PKT_BUF_SIZE 1856
+#define PKT_BUF_JUMBO_SIZE (9*1024)
+#define PKT_BUF_JUMBO_MAX_PAYLOAD (PKT_BUF_JUMBO_SIZE -\
+				   (ODPH_UDPHDR_LEN +\
+				   ODPH_IPV4HDR_LEN + ODPH_ETHHDR_LEN))
 #define MAX_NUM_IFACES 2
 #define TEST_SEQ_INVALID ((uint32_t)~0)
 #define TEST_SEQ_MAGIC 0x92749451
@@ -33,12 +37,21 @@ typedef struct {
 	odp_queue_t inq;
 } pktio_info_t;
 
-/** structure of test packet UDP payload */
-typedef struct {
+typedef struct ODP_PACKED {
 	uint32be_t magic;
 	uint32be_t seq;
+} pkt_head_t;
+
+/** structure of test packet UDP payload */
+typedef struct ODP_PACKED {
+	pkt_head_t head;
+	char data[PKT_BUF_JUMBO_MAX_PAYLOAD - sizeof(pkt_head_t) -
+		  sizeof(uint32be_t)];
+	uint32be_t magic2;
 } pkt_test_data_t;
 
+static int test_jumbo;
+
 /** default packet pool */
 odp_pool_t default_pkt_pool = ODP_POOL_INVALID;
 
@@ -59,14 +72,18 @@ static void pktio_pkt_set_macs(odp_packet_t pkt,
 	CU_ASSERT(ret == ODPH_ETHADDR_LEN);
 }
 
+static uint32_t pkt_payload_len(void)
+{
+	return test_jumbo ?
+		sizeof(pkt_test_data_t) : sizeof(pkt_head_t);
+}
+
 static int pktio_pkt_set_seq(odp_packet_t pkt)
 {
 	static uint32_t tstseq;
 	size_t l4_off;
-	pkt_test_data_t data;
+	pkt_test_data_t *data;
+	uint32_t len = pkt_payload_len();
 
-	data.magic = TEST_SEQ_MAGIC;
-	data.seq = tstseq;
 
 	l4_off = odp_packet_l4_offset(pkt);
 	if (!l4_off) {
@@ -74,9 +91,16 @@ static int pktio_pkt_set_seq(odp_packet_t pkt)
 		return -1;
 	}
 
+	data = calloc(1, len);
+	CU_ASSERT_FATAL(data != NULL);
+
+	data->head.magic = TEST_SEQ_MAGIC;
+	data->magic2 = TEST_SEQ_MAGIC;
+	data->head.seq = tstseq;
+
 	odp_packet_copydata_in(pkt, l4_off+ODPH_UDPHDR_LEN,
-			       sizeof(data), &data);
-
+			       len, data);
+	free(data);
 	tstseq++;
 
 	return 0;
@@ -85,18 +109,30 @@ static uint32_t pktio_pkt_seq(odp_packet_t pkt)
 {
 	size_t l4_off;
-	pkt_test_data_t data;
+	uint32_t seq = TEST_SEQ_INVALID;
+	pkt_test_data_t *data;
+	uint32_t len = pkt_payload_len();
 
 	l4_off = odp_packet_l4_offset(pkt);
-	if (l4_off) {
-		odp_packet_copydata_out(pkt, l4_off+ODPH_UDPHDR_LEN,
-					sizeof(data), &data);
+	if (l4_off == ODP_PACKET_OFFSET_INVALID)
+		return TEST_SEQ_INVALID;
 
-		if (data.magic == TEST_SEQ_MAGIC)
-			return data.seq;
+	data = calloc(1, len);
+	CU_ASSERT_FATAL(data != NULL);
+
+	odp_packet_copydata_out(pkt, l4_off+ODPH_UDPHDR_LEN,
+				len, data);
+
+	if (data->head.magic == TEST_SEQ_MAGIC) {
+		if (test_jumbo && data->magic2 != TEST_SEQ_MAGIC) {
+			free(data);
+			return TEST_SEQ_INVALID;
+		}
+		seq = data->head.seq;
 	}
 
-	return TEST_SEQ_INVALID;
+	free(data);
+	return seq;
 }
 
 static odp_packet_t pktio_create_packet(void)
@@ -107,7 +143,7 @@
 	odph_udphdr_t *udp;
 	char *buf;
 	uint16_t seq;
-	size_t payload_len = sizeof(pkt_test_data_t);
+	size_t payload_len = pkt_payload_len();
 	uint8_t mac[ODPH_ETHADDR_LEN] = {0};
 
 	pkt = odp_packet_alloc(default_pkt_pool, payload_len + ODPH_UDPHDR_LEN +
@@ -187,8 +223,8 @@ static int default_pool_create(void)
 		return -1;
 
 	memset(&params, 0, sizeof(params));
-	params.pkt.seg_len = PKT_BUF_SIZE;
-	params.pkt.len = PKT_BUF_SIZE;
+	params.pkt.seg_len = PKT_BUF_JUMBO_SIZE;
+	params.pkt.len = PKT_BUF_JUMBO_SIZE;
 	params.pkt.num = PKT_BUF_NUM;
 	params.type = ODP_POOL_PACKET;
 
@@ -208,15 +244,24 @@ static odp_pktio_t create_pktio(const char *iface)
 	odp_pool_param_t params;
 
 	memset(&params, 0, sizeof(params));
-	params.pkt.seg_len = PKT_BUF_SIZE;
-	params.pkt.len = PKT_BUF_SIZE;
+	if (test_jumbo) {
+		params.pkt.seg_len = PKT_BUF_JUMBO_SIZE;
+		params.pkt.len = PKT_BUF_JUMBO_SIZE;
+
+	} else {
+		params.pkt.seg_len = PKT_BUF_SIZE;
+		params.pkt.len = PKT_BUF_SIZE;
+	}
 	params.pkt.num = PKT_BUF_NUM;
 	params.type = ODP_POOL_PACKET;
 
 	snprintf(pool_name, sizeof(pool_name), "pkt_pool_%s", iface);
+
 	pool = odp_pool_lookup(pool_name);
-	if (pool == ODP_POOL_INVALID)
-		pool = odp_pool_create(pool_name, ODP_SHM_NULL, &params);
+	if (pool != ODP_POOL_INVALID)
+		odp_pool_destroy(pool);
+
+	pool = odp_pool_create(pool_name, ODP_SHM_NULL, &params);
 	CU_ASSERT(pool != ODP_POOL_INVALID);
 
 	pktio = odp_pktio_open(iface, pool);
@@ -450,6 +495,13 @@ static void test_odp_pktio_sched_multi(void)
 	pktio_test_txrx(ODP_QUEUE_TYPE_SCHED, 4);
 }
 
+static void test_odp_pktio_jumbo(void)
+{
+	test_jumbo = 1;
+	test_odp_pktio_sched_multi();
+	test_jumbo = 0;
+}
+
 static void test_odp_pktio_mtu(void)
 {
 	int ret;
@@ -668,6 +720,7 @@ CU_TestInfo pktio_tests[] = {
 	{"pktio poll multi", test_odp_pktio_poll_multi},
 	{"pktio sched queues", test_odp_pktio_sched_queue},
 	{"pktio sched multi", test_odp_pktio_sched_multi},
+	{"pktio jumbo frames", test_odp_pktio_jumbo},
 	{"pktio mtu", test_odp_pktio_mtu},
 	{"pktio promisc mode", test_odp_pktio_promisc},
 	{"pktio mac", test_odp_pktio_mac},
diff --git a/test/validation/odp_pktio_run b/test/validation/odp_pktio_run
index 08288e6..b9d7e3c 100755
--- a/test/validation/odp_pktio_run
+++ b/test/validation/odp_pktio_run
@@ -56,8 +56,8 @@ setup_env1()
 		echo "pktio: error: unable to create veth pair"
 		exit $TEST_SKIPPED
 	fi
-	ip link set $IF0 up
-	ip link set $IF1 up
+	ip link set $IF0 mtu 9216 up
+	ip link set $IF1 mtu 9216 up
 
 	# network needs a little time to come up
 	sleep 1
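For reference, the 9216-byte MTU set on the veth pair above matches
PKT_BUF_JUMBO_SIZE (9*1024) used by the test. A small standalone sketch of
the payload arithmetic behind PKT_BUF_JUMBO_MAX_PAYLOAD (not part of the
patch; ETH_HDR_LEN, IPV4_HDR_LEN and UDP_HDR_LEN are local stand-ins for the
standard sizes that the ODPH_*_LEN helpers resolve to):

#include <stdio.h>

#define ETH_HDR_LEN   14	/* stand-in for ODPH_ETHHDR_LEN  */
#define IPV4_HDR_LEN  20	/* stand-in for ODPH_IPV4HDR_LEN */
#define UDP_HDR_LEN    8	/* stand-in for ODPH_UDPHDR_LEN  */
#define JUMBO_SIZE    (9 * 1024)

int main(void)
{
	int max_payload = JUMBO_SIZE -
			  (UDP_HDR_LEN + IPV4_HDR_LEN + ETH_HDR_LEN);

	/* 9216 - 42 = 9174 bytes of UDP payload; the test fills it with a
	 * pkt_head_t (magic + seq), a filler area and a trailing magic2. */
	printf("jumbo frame %d bytes -> max UDP payload %d bytes\n",
	       JUMBO_SIZE, max_payload);
	return 0;
}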