From patchwork Thu Feb 1 11:00:08 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Github ODP bot
X-Patchwork-Id: 126437
Delivered-To: patch@linaro.org
From: Github ODP bot
To: lng-odp@lists.linaro.org
Date: Thu, 1 Feb 2018 14:00:08 +0300
Message-Id: <1517482808-27865-2-git-send-email-odpbot@yandex.ru>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1517482808-27865-1-git-send-email-odpbot@yandex.ru>
References: <1517482808-27865-1-git-send-email-odpbot@yandex.ru>
Github-pr-num: 442
Subject: [lng-odp] [PATCH CATERPILLAR v1 1/1] linux-gen: add cxgbe RX slot mode support

From: Mykyta Iziumtsev

To use RX slot mode, redefine CXGB4_RX_BUF_SIZE_INDEX=2, either locally in
cxgb4.c or via CFLAGS passed to ./configure. A supplementary patch to the
kernel driver must be applied as well, and the kernel driver must be loaded
with the rx_mode=1 parameter.
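As an illustration of the override described above: the following is a
standalone sketch, not part of the patch. It only mirrors the macros the
patch adds to cxgb4.c, and the -DCXGB4_RX_BUF_SIZE_INDEX=2 build flag is an
assumed way of passing the override through CFLAGS. Building with index 2
(the 1500-byte MTU class) selects 2048-byte RX buffers, while the default
index 0 keeps 4096-byte buffers.

#include <stdio.h>

/* Same indices the patch defines in cxgb4.c */
#define CXGB4_RX_BUF_SIZE_INDEX_4K   0x0UL
#define CXGB4_RX_BUF_SIZE_INDEX_64K  0x1UL
#define CXGB4_RX_BUF_SIZE_INDEX_1500 0x2UL
#define CXGB4_RX_BUF_SIZE_INDEX_9000 0x3UL

/* Default index; override with e.g. -DCXGB4_RX_BUF_SIZE_INDEX=2 */
#ifndef CXGB4_RX_BUF_SIZE_INDEX
#define CXGB4_RX_BUF_SIZE_INDEX CXGB4_RX_BUF_SIZE_INDEX_4K
#endif

/* Same selection chain as the patch: only the 4K and 1500 classes are
 * implemented; any other index fails the build. */
#if CXGB4_RX_BUF_SIZE_INDEX == CXGB4_RX_BUF_SIZE_INDEX_4K
#define CXGB4_RX_BUF_SIZE 4096UL
#elif CXGB4_RX_BUF_SIZE_INDEX == CXGB4_RX_BUF_SIZE_INDEX_1500
#define CXGB4_RX_BUF_SIZE 2048UL
#else
#error "Support for requested RX buffer size isn't implemented"
#endif

int main(void)
{
	printf("RX buffer size index %lu -> %lu-byte buffers\n",
	       (unsigned long)CXGB4_RX_BUF_SIZE_INDEX,
	       (unsigned long)CXGB4_RX_BUF_SIZE);
	return 0;
}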
Signed-off-by: Mykyta Iziumtsev
---
/** Email created from pull request 442 (MykytaI:caterpillar_mdev_cxgbe_slot_rx)
 ** https://github.com/Linaro/odp/pull/442
 ** Patch: https://github.com/Linaro/odp/pull/442.patch
 ** Base sha: 30009156994928b0ad9f6c8932c20abef269c098
 ** Merge commit sha: 932b4f9676d8a6fdee0cd70841c2f33720105073
 **/
 platform/linux-generic/pktio/mdev/cxgb4.c | 106 +++++++++++++++---------------
 1 file changed, 52 insertions(+), 54 deletions(-)

diff --git a/platform/linux-generic/pktio/mdev/cxgb4.c b/platform/linux-generic/pktio/mdev/cxgb4.c
index 8a34c2399..cef4806fb 100644
--- a/platform/linux-generic/pktio/mdev/cxgb4.c
+++ b/platform/linux-generic/pktio/mdev/cxgb4.c
@@ -36,6 +36,25 @@
 /* RX queue definitions */
 #define CXGB4_RX_QUEUE_NUM_MAX 32
 
+/* RX buffer size indices for firmware */
+#define CXGB4_RX_BUF_SIZE_INDEX_4K 0x0UL
+#define CXGB4_RX_BUF_SIZE_INDEX_64K 0x1UL
+#define CXGB4_RX_BUF_SIZE_INDEX_1500 0x2UL
+#define CXGB4_RX_BUF_SIZE_INDEX_9000 0x3UL
+
+/* Default RX buffer size index to use */
+#ifndef CXGB4_RX_BUF_SIZE_INDEX
+#define CXGB4_RX_BUF_SIZE_INDEX CXGB4_RX_BUF_SIZE_INDEX_4K
+#endif /* CXGB4_RX_BUF_SIZE_INDEX */
+
+#if CXGB4_RX_BUF_SIZE_INDEX == CXGB4_RX_BUF_SIZE_INDEX_4K
+#define CXGB4_RX_BUF_SIZE 4096UL
+#elif CXGB4_RX_BUF_SIZE_INDEX == CXGB4_RX_BUF_SIZE_INDEX_1500
+#define CXGB4_RX_BUF_SIZE 2048UL
+#else /* CXGB4_RX_BUF_SIZE_INDEX */
+#error "Support for requested RX buffer size isn't implemented"
+#endif /* CXGB4_RX_BUF_SIZE_INDEX */
+
 /** RX descriptor */
 typedef struct {
 	uint32_t padding[12];
@@ -72,18 +91,15 @@ typedef struct ODP_ALIGNED_CACHE {
 	uint32_t doorbell_desc_key;	/**< 'Key' to the doorbell */
 
 	uint16_t rx_queue_len;		/**< Number of RX desc entries */
-	uint16_t rx_next;		/**< Next RX desc to handle */
+	uint16_t cidx;			/**< Next RX desc to handle */
 	uint32_t gen:1;			/**< RX queue generation */
 
 	odp_u64be_t *free_list;		/**< Free list base */
-
-	uint8_t free_list_len;		/**< Number of free list entries */
-	uint8_t commit_pending;		/**< Free list entries pending commit */
-
-	uint8_t cidx;			/**< Free list consumer index */
-	uint8_t pidx;			/**< Free list producer index */
-
+	uint16_t free_list_len;		/**< Number of free list entries */
+	uint16_t commit_pending;	/**< Free list entries pending commit */
+	uint16_t free_list_cidx;	/**< Free list consumer index */
+	uint16_t free_list_pidx;	/**< Free list producer index */
 	uint32_t offset;		/**< Offset into last free fragment */
 
 	mdev_dma_area_t rx_data;	/**< RX packet payload area */
@@ -214,7 +230,7 @@ typedef struct {
 	mdev_device_t mdev;		/**< Common mdev data */
 } pktio_ops_cxgb4_data_t;
 
-static void cxgb4_rx_refill(cxgb4_rx_queue_t *rxq, uint8_t num);
+static void cxgb4_rx_refill(cxgb4_rx_queue_t *rxq, uint16_t num);
 static void cxgb4_wait_link_up(pktio_entry_t *pktio_entry);
 static int cxgb4_close(pktio_entry_t *pktio_entry);
 
@@ -236,7 +252,8 @@ static int cxgb4_mmio_register(pktio_ops_cxgb4_data_t *pkt_cxgb4,
 
 static int cxgb4_rx_queue_register(pktio_ops_cxgb4_data_t *pkt_cxgb4,
 				   uint64_t offset, uint64_t size,
-				   uint64_t free_list_offset)
+				   uint64_t free_list_offset,
+				   uint64_t free_list_size)
 {
 	uint16_t rxq_idx = pkt_cxgb4->capa.max_input_queues++;
 	cxgb4_rx_queue_t *rxq = &pkt_cxgb4->rx_queues[rxq_idx];
@@ -305,17 +322,14 @@ static int cxgb4_rx_queue_register(pktio_ops_cxgb4_data_t *pkt_cxgb4,
 		return -1;
 	}
 
-	ODP_ASSERT(rxq->free_list_len * sizeof(*rxq->free_list) <=
-		   ODP_PAGE_SIZE);
-
-	rxq->free_list =
-	    mdev_region_mmap(&pkt_cxgb4->mdev, free_list_offset, ODP_PAGE_SIZE);
+	rxq->free_list = mdev_region_mmap(&pkt_cxgb4->mdev, free_list_offset,
+					  free_list_size);
 	if (rxq->free_list == MAP_FAILED) {
 		ODP_ERR("Cannot mmap RX queue free list\n");
 		return -1;
 	}
 
-	rxq->rx_data.size = rxq->free_list_len * ODP_PAGE_SIZE;
+	rxq->rx_data.size = rxq->free_list_len * CXGB4_RX_BUF_SIZE;
 	ret = mdev_dma_area_alloc(&pkt_cxgb4->mdev, &rxq->rx_data);
 	if (ret) {
 		ODP_ERR("Cannot allocate RX queue DMA area\n");
@@ -329,7 +343,7 @@ static int cxgb4_rx_queue_register(pktio_ops_cxgb4_data_t *pkt_cxgb4,
 	 * otherwise HW will think the free list is empty.
 	 */
 	cxgb4_rx_refill(rxq, rxq->free_list_len - 8);
-	rxq->cidx = rxq->free_list_len - 1;
+	rxq->free_list_cidx = rxq->free_list_len - 1;
 
 	ODP_DBG("Register RX queue region: 0x%llx@%016llx\n", size, offset);
 	ODP_DBG("    RX descriptors: %u\n", rxq->rx_queue_len);
@@ -450,12 +464,11 @@ static int cxgb4_region_info_cb(mdev_device_t *mdev,
 			return -1;
 		}
 
-		ODP_ASSERT(sparse->areas[1].size == ODP_PAGE_SIZE);
-
 		return cxgb4_rx_queue_register(pkt_cxgb4,
 					       sparse->areas[0].offset,
 					       sparse->areas[0].size,
-					       sparse->areas[1].offset);
+					       sparse->areas[1].offset,
+					       sparse->areas[1].size);
 
 	case VFIO_NET_MDEV_TX_RING:
 		return cxgb4_tx_queue_register(pkt_cxgb4,
@@ -579,18 +592,20 @@ static int cxgb4_close(pktio_entry_t *pktio_entry)
 	return 0;
 }
 
-static void cxgb4_rx_refill(cxgb4_rx_queue_t *rxq, uint8_t num)
+static void cxgb4_rx_refill(cxgb4_rx_queue_t *rxq, uint16_t num)
 {
 	rxq->commit_pending += num;
 
 	while (num) {
-		uint64_t iova = rxq->rx_data.iova + rxq->pidx * ODP_PAGE_SIZE;
+		uint64_t iova = rxq->rx_data.iova +
+			rxq->free_list_pidx * CXGB4_RX_BUF_SIZE;
 
-		rxq->free_list[rxq->pidx] = odp_cpu_to_be_64(iova);
+		rxq->free_list[rxq->free_list_pidx] =
+			odp_cpu_to_be_64(iova | CXGB4_RX_BUF_SIZE_INDEX);
 
-		rxq->pidx++;
-		if (odp_unlikely(rxq->pidx >= rxq->free_list_len))
-			rxq->pidx = 0;
+		rxq->free_list_pidx++;
+		if (odp_unlikely(rxq->free_list_pidx >= rxq->free_list_len))
+			rxq->free_list_pidx = 0;
 
 		num--;
 	}
@@ -618,7 +633,7 @@ static int cxgb4_recv(pktio_entry_t *pktio_entry,
 	odp_ticketlock_lock(&rxq->lock);
 
 	while (rx_pkts < num) {
-		volatile cxgb4_rx_desc_t *rxd = &rxq->rx_descs[rxq->rx_next];
+		volatile cxgb4_rx_desc_t *rxd = &rxq->rx_descs[rxq->cidx];
 		odp_packet_t pkt;
 		odp_packet_hdr_t *pkt_hdr;
 		uint32_t pkt_len;
@@ -626,24 +641,6 @@ static int cxgb4_recv(pktio_entry_t *pktio_entry,
 		if (RX_DESC_TO_GEN(rxd) != rxq->gen)
 			break;
 
-		/*
-		 * RX queue shall receive only packet descriptors, so this
-		 * condition shall never ever become true. Still, we try to
-		 * be on a safe side and gracefully skip unexpected descriptor.
-		 */
-		if (odp_unlikely(RX_DESC_TO_TYPE(rxd) !=
-				 RX_DESC_TYPE_FLBUF_X)) {
-			ODP_ERR("Invalid rxd type %u\n", RX_DESC_TO_TYPE(rxd));
-
-			rxq->rx_next++;
-			if (odp_unlikely(rxq->rx_next >= rxq->rx_queue_len)) {
-				rxq->rx_next = 0;
-				rxq->gen ^= 1;
-			}
-
-			continue;
-		}
-
 		pkt_len = odp_be_to_cpu_32(rxd->pldbuflen_qid);
 
 		pkt =
@@ -657,9 +654,10 @@ static int cxgb4_recv(pktio_entry_t *pktio_entry,
 		 * next one from the beginning.
 		 */
 		if (pkt_len & RX_DESC_NEW_BUF_FLAG) {
-			rxq->cidx++;
-			if (odp_unlikely(rxq->cidx >= rxq->free_list_len))
-				rxq->cidx = 0;
+			rxq->free_list_cidx++;
+			if (odp_unlikely(rxq->free_list_cidx >=
+					 rxq->free_list_len))
+				rxq->free_list_cidx = 0;
 
 			rxq->offset = 0;
 			refill_count++;
@@ -668,15 +666,15 @@ static int cxgb4_recv(pktio_entry_t *pktio_entry,
 		}
 
 		/* TODO: gracefully put pktio into error state */
-		if (odp_unlikely(rxq->offset + pkt_len > ODP_PAGE_SIZE))
+		if (odp_unlikely(rxq->offset + pkt_len > CXGB4_RX_BUF_SIZE))
 			ODP_ABORT("Packet write beyond buffer boundary\n");
 
 		rxq->offset +=
 		    ROUNDUP_ALIGN(pkt_len, pkt_cxgb4->free_list_align);
 
-		rxq->rx_next++;
-		if (odp_unlikely(rxq->rx_next >= rxq->rx_queue_len)) {
-			rxq->rx_next = 0;
+		rxq->cidx++;
+		if (odp_unlikely(rxq->cidx >= rxq->rx_queue_len)) {
+			rxq->cidx = 0;
 			rxq->gen ^= 1;
 		}
 
@@ -686,8 +684,8 @@ static int cxgb4_recv(pktio_entry_t *pktio_entry,
 		 */
 		odp_packet_copy_from_mem(pkt, 0, pkt_len - 2,
 					 (uint8_t *)rxq->rx_data.vaddr +
-					 rxq->cidx * ODP_PAGE_SIZE +
-					 rxq->offset + 2);
+					 rxq->free_list_cidx *
+					 CXGB4_RX_BUF_SIZE + rxq->offset + 2);
 
 		pkt_hdr = odp_packet_hdr(pkt);
 		pkt_hdr->input = pktio_entry->s.handle;
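
For context, the free-list encoding that the patched cxgb4_rx_refill() switches
to can be sketched as below. This is a minimal standalone illustration under
stated assumptions, not driver code: fl_entry_encode(), the dma_base/pidx
values and the decoding masks are hypothetical, and the real driver
additionally converts the entry to big-endian with odp_cpu_to_be_64() before
writing it to the mmap'ed free list. The point it shows is that each RX buffer
starts at a multiple of CXGB4_RX_BUF_SIZE, so the low bits of the IOVA are
free to carry the buffer-size index for the firmware.

#include <inttypes.h>
#include <stdio.h>

#define RX_BUF_SIZE_INDEX 0x2UL   /* e.g. CXGB4_RX_BUF_SIZE_INDEX_1500 */
#define RX_BUF_SIZE       2048UL  /* buffer size selected by that index */

/* Compose one free-list entry: buffer IOVA plus size index in the low
 * bits. Works because each buffer starts at a multiple of RX_BUF_SIZE,
 * so the low bits of the address are guaranteed to be zero. */
static uint64_t fl_entry_encode(uint64_t dma_base, uint16_t pidx)
{
	uint64_t iova = dma_base + (uint64_t)pidx * RX_BUF_SIZE;

	return iova | RX_BUF_SIZE_INDEX;
}

int main(void)
{
	uint64_t entry = fl_entry_encode(0x10000000UL, 3);

	/* Split the entry back into buffer address and size index */
	printf("iova 0x%" PRIx64 ", size index %" PRIu64 "\n",
	       entry & ~(uint64_t)(RX_BUF_SIZE - 1),
	       entry & (uint64_t)(RX_BUF_SIZE - 1));
	return 0;
}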