From patchwork Wed Nov 6 14:43:56 2019
X-Patchwork-Submitter: Nipun Gupta
X-Patchwork-Id: 178721
From: Nipun Gupta
To: dev@dpdk.org
Cc: thomas@monjalon.net, hemant.agrawal@nxp.com, Minghuan Lian,
 Sachin Saxena, Nipun Gupta
Date: Wed, 6 Nov 2019 20:13:56 +0530
Message-Id: <20191106144357.4899-1-nipun.gupta@nxp.com>
Subject: [dpdk-dev] [PATCH 1/2] raw/dpaa2_qdma: add support for route by port in DMA

RBP (route by port) enables translation of DMA addresses across the
PCIe bus.
Add RBP support for both the long and the ultra-short frame
descriptor formats.

Signed-off-by: Minghuan Lian
Signed-off-by: Sachin Saxena
Signed-off-by: Nipun Gupta
Acked-by: Hemant Agrawal
---
 .../bus/fslmc/qbman/include/fsl_qbman_base.h |  84 ++++
 drivers/raw/dpaa2_qdma/dpaa2_qdma.c          | 399 ++++++++++++------
 drivers/raw/dpaa2_qdma/dpaa2_qdma.h          |  17 +-
 drivers/raw/dpaa2_qdma/rte_pmd_dpaa2_qdma.h  |   9 +
 4 files changed, 364 insertions(+), 145 deletions(-)
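For context, a minimal sketch (not part of the patch) of how an
application might select the new FD format and route a copy over PCIe.
rte_qdma_vq_create_rbp() is only partially visible in this patch, so
its trailing argument and the pool sizing below are assumptions:

    #include <rte_pmd_dpaa2_qdma.h>

    static int setup_qdma_over_pcie(uint32_t lcore_id)
    {
        struct rte_qdma_config cfg = {
            .max_vqs = 1,
            .mode = RTE_QDMA_MODE_VIRTUAL,
            .format = RTE_QDMA_ULTRASHORT_FORMAT, /* new in this patch */
            .fle_pool_count = 4096,               /* assumed sizing */
        };
        struct rte_qdma_rbp rbp = {
            .enable = 1,
            .drbp = 1,    /* destination is routed by (PCIe) port */
            .dportid = 3, /* assumed PCIe controller id */
        };
        int ret;

        ret = rte_qdma_configure(&cfg);
        if (ret)
            return ret;

        /* Assumed shape: lcore, flags, then the RBP descriptor. */
        return rte_qdma_vq_create_rbp(lcore_id, 0, &rbp);
    }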
diff --git a/drivers/bus/fslmc/qbman/include/fsl_qbman_base.h b/drivers/bus/fslmc/qbman/include/fsl_qbman_base.h
index 48bdaafa4..9323db370 100644
--- a/drivers/bus/fslmc/qbman/include/fsl_qbman_base.h
+++ b/drivers/bus/fslmc/qbman/include/fsl_qbman_base.h
@@ -135,6 +135,90 @@ struct qbman_fd {
             uint32_t flc_lo;
             uint32_t flc_hi;
         } simple;
+
+        struct qbman_fd_us_pci_simple {
+            uint32_t saddr_lo;
+            uint32_t saddr_hi;
+
+            uint32_t len_sl:18;
+            uint32_t rsv1:14;
+
+            uint32_t sportid:4;
+            uint32_t rsv2:22;
+            uint32_t bmt:1;
+            uint32_t rsv3:1;
+            uint32_t fmt:2;
+            uint32_t sl:1;
+            uint32_t rsv4:1;
+
+            uint32_t acc_err:4;
+            uint32_t rsv5:4;
+            uint32_t ser:1;
+            uint32_t rsv6:3;
+            uint32_t wrttype:4;
+            uint32_t dqos:3;
+            uint32_t drbp:1;
+            uint32_t dlwc:2;
+            uint32_t rsv7:2;
+            uint32_t rdttype:4;
+            uint32_t sqos:3;
+            uint32_t srbp:1;
+
+            uint32_t error:8;
+            uint32_t dportid:4;
+            uint32_t rsv8:5;
+            uint32_t dca:1;
+            uint32_t dat:2;
+            uint32_t dattr:3;
+            uint32_t dvfa:1;
+            uint32_t dtc:3;
+            uint32_t so:1;
+            uint32_t dd:4;
+
+            uint32_t daddr_lo;
+            uint32_t daddr_hi;
+        } simple_pci;
+        struct qbman_fd_us_ddr_simple {
+            uint32_t saddr_lo;
+
+            uint32_t saddr_hi:17;
+            uint32_t rsv1:15;
+
+            uint32_t len;
+
+            uint32_t rsv2:15;
+            uint32_t bmt:1;
+            uint32_t rsv3:12;
+            uint32_t fmt:2;
+            uint32_t sl:1;
+            uint32_t rsv4:1;
+
+            uint32_t acc_err:4;
+            uint32_t rsv5:4;
+            uint32_t ser:1;
+            uint32_t rsv6:2;
+            uint32_t wns:1;
+            uint32_t wrttype:4;
+            uint32_t dqos:3;
+            uint32_t rsv12:1;
+            uint32_t dlwc:2;
+            uint32_t rsv7:1;
+            uint32_t rns:1;
+            uint32_t rdttype:4;
+            uint32_t sqos:3;
+            uint32_t rsv11:1;
+
+            uint32_t error:8;
+            uint32_t rsv8:6;
+            uint32_t va:1;
+            uint32_t rsv9:13;
+            uint32_t dd:4;
+
+            uint32_t daddr_lo;
+
+            uint32_t daddr_hi:17;
+            uint32_t rsv10:15;
+        } simple_ddr;
     };
 };
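The ultra-short PCI FD above splits each 64-bit bus address across a
lo/hi pair and carries the transfer length in the 18-bit len_sl field
(the implied per-FD length cap is inferred from the field width, not
from documentation). A hedged illustration of the packing, using a
hypothetical helper; the driver itself uses lower_32_bits() and
upper_32_bits() for the same effect:

    /* Hypothetical helper mirroring the lo/hi split used by the
     * driver; assumes the new fsl_qbman_base.h definitions above. */
    static inline void fd_pci_set_src(struct qbman_fd *fd, uint64_t addr)
    {
        fd->simple_pci.saddr_lo = (uint32_t)(addr & 0xffffffffUL);
        fd->simple_pci.saddr_hi = (uint32_t)(addr >> 32);
    }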
diff --git a/drivers/raw/dpaa2_qdma/dpaa2_qdma.c b/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
index af678273d..c90595400 100644
--- a/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
+++ b/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
@@ -54,6 +54,267 @@ typedef int (dpdmai_dev_dequeue_multijob_t)(struct dpaa2_dpdmai_dev *dpdmai_dev,
 
 dpdmai_dev_dequeue_multijob_t *dpdmai_dev_dequeue_multijob;
 
+typedef uint16_t (dpdmai_dev_get_job_t)(const struct qbman_fd *fd,
+                    struct rte_qdma_job **job);
+typedef int (dpdmai_dev_set_fd_t)(struct qbman_fd *fd,
+                  struct rte_qdma_job *job,
+                  struct rte_qdma_rbp *rbp,
+                  uint16_t vq_id);
+
+dpdmai_dev_get_job_t *dpdmai_dev_get_job;
+dpdmai_dev_set_fd_t *dpdmai_dev_set_fd;
+
+static inline int
+qdma_populate_fd_pci(phys_addr_t src, phys_addr_t dest,
+             uint32_t len, struct qbman_fd *fd,
+             struct rte_qdma_rbp *rbp)
+{
+    fd->simple_pci.saddr_lo = lower_32_bits((uint64_t) (src));
+    fd->simple_pci.saddr_hi = upper_32_bits((uint64_t) (src));
+
+    fd->simple_pci.len_sl = len;
+
+    fd->simple_pci.bmt = 1;
+    fd->simple_pci.fmt = 3;
+    fd->simple_pci.sl = 1;
+    fd->simple_pci.ser = 1;
+
+    fd->simple_pci.sportid = rbp->sportid;    /*pcie 3 */
+    fd->simple_pci.srbp = rbp->srbp;
+    if (rbp->srbp)
+        fd->simple_pci.rdttype = 0;
+    else
+        fd->simple_pci.rdttype = dpaa2_coherent_alloc_cache;
+
+    /*dest is pcie memory */
+    fd->simple_pci.dportid = rbp->dportid;    /*pcie 3 */
+    fd->simple_pci.drbp = rbp->drbp;
+    if (rbp->drbp)
+        fd->simple_pci.wrttype = 0;
+    else
+        fd->simple_pci.wrttype = dpaa2_coherent_no_alloc_cache;
+
+    fd->simple_pci.daddr_lo = lower_32_bits((uint64_t) (dest));
+    fd->simple_pci.daddr_hi = upper_32_bits((uint64_t) (dest));
+
+    return 0;
+}
+
+static inline int
+qdma_populate_fd_ddr(phys_addr_t src, phys_addr_t dest,
+             uint32_t len, struct qbman_fd *fd)
+{
+    fd->simple_ddr.saddr_lo = lower_32_bits((uint64_t) (src));
+    fd->simple_ddr.saddr_hi = upper_32_bits((uint64_t) (src));
+
+    fd->simple_ddr.len = len;
+
+    fd->simple_ddr.bmt = 1;
+    fd->simple_ddr.fmt = 3;
+    fd->simple_ddr.sl = 1;
+    fd->simple_ddr.ser = 1;
+    /**
+     * src If RBP=0 {NS,RDTTYPE[3:0]}: 0_1011
+     * Coherent copy of cacheable memory,
+     * lookup in downstream cache, no allocate
+     * on miss
+     */
+    fd->simple_ddr.rns = 0;
+    fd->simple_ddr.rdttype = dpaa2_coherent_alloc_cache;
+    /**
+     * dest If RBP=0 {NS,WRTTYPE[3:0]}: 0_0111
+     * Coherent write of cacheable memory,
+     * lookup in downstream cache, no allocate on miss
+     */
+    fd->simple_ddr.wns = 0;
+    fd->simple_ddr.wrttype = dpaa2_coherent_no_alloc_cache;
+
+    fd->simple_ddr.daddr_lo = lower_32_bits((uint64_t) (dest));
+    fd->simple_ddr.daddr_hi = upper_32_bits((uint64_t) (dest));
+
+    return 0;
+}
+
+static void
+dpaa2_qdma_populate_fle(struct qbman_fle *fle,
+            struct rte_qdma_rbp *rbp,
+            uint64_t src, uint64_t dest,
+            size_t len, uint32_t flags)
+{
+    struct qdma_sdd *sdd;
+
+    sdd = (struct qdma_sdd *)((uint8_t *)(fle) +
+        (DPAA2_QDMA_MAX_FLE * sizeof(struct qbman_fle)));
+
+    /* first frame list to source descriptor */
+    DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sdd));
+    DPAA2_SET_FLE_LEN(fle, (2 * (sizeof(struct qdma_sdd))));
+
+    /* source and destination descriptor */
+    if (rbp && rbp->enable) {
+        /* source */
+        sdd->read_cmd.portid = rbp->sportid;
+        sdd->rbpcmd_simple.pfid = rbp->spfid;
+        sdd->rbpcmd_simple.vfid = rbp->svfid;
+
+        if (rbp->srbp) {
+            sdd->read_cmd.rbp = rbp->srbp;
+            sdd->read_cmd.rdtype = DPAA2_RBP_MEM_RW;
+        } else {
+            sdd->read_cmd.rdtype = dpaa2_coherent_no_alloc_cache;
+        }
+        sdd++;
+        /* destination */
+        sdd->write_cmd.portid = rbp->dportid;
+        sdd->rbpcmd_simple.pfid = rbp->dpfid;
+        sdd->rbpcmd_simple.vfid = rbp->dvfid;
+
+        if (rbp->drbp) {
+            sdd->write_cmd.rbp = rbp->drbp;
+            sdd->write_cmd.wrttype = DPAA2_RBP_MEM_RW;
+        } else {
+            sdd->write_cmd.wrttype = dpaa2_coherent_alloc_cache;
+        }
+
+    } else {
+        sdd->read_cmd.rdtype = dpaa2_coherent_no_alloc_cache;
+        sdd++;
+        sdd->write_cmd.wrttype = dpaa2_coherent_alloc_cache;
+    }
+    fle++;
+    /* source frame list to source buffer */
+    if (flags & RTE_QDMA_JOB_SRC_PHY) {
+        DPAA2_SET_FLE_ADDR(fle, src);
+        DPAA2_SET_FLE_BMT(fle);
+    } else {
+        DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(src));
+    }
+    DPAA2_SET_FLE_LEN(fle, len);
+
+    fle++;
+    /* destination frame list to destination buffer */
+    if (flags & RTE_QDMA_JOB_DEST_PHY) {
+        DPAA2_SET_FLE_BMT(fle);
+        DPAA2_SET_FLE_ADDR(fle, dest);
+    } else {
+        DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(dest));
+    }
+    DPAA2_SET_FLE_LEN(fle, len);
+
+    /* Final bit: 1, for last frame list */
+    DPAA2_SET_FLE_FIN(fle);
+}
+
+static inline int dpdmai_dev_set_fd_us(struct qbman_fd *fd,
+                       struct rte_qdma_job *job,
+                       struct rte_qdma_rbp *rbp,
+                       uint16_t vq_id)
+{
+    struct rte_qdma_job **ppjob;
+    size_t iova;
+    int ret = 0;
+
+    if (job->src & QDMA_RBP_UPPER_ADDRESS_MASK)
+        iova = (size_t)job->dest;
+    else
+        iova = (size_t)job->src;
+
+    /* Set the metadata */
+    job->vq_id = vq_id;
+    ppjob = (struct rte_qdma_job **)DPAA2_IOVA_TO_VADDR(iova) - 1;
+    *ppjob = job;
+
+    if ((rbp->drbp == 1) || (rbp->srbp == 1))
+        ret = qdma_populate_fd_pci((phys_addr_t) job->src,
+                       (phys_addr_t) job->dest,
+                       job->len, fd, rbp);
+    else
+        ret = qdma_populate_fd_ddr((phys_addr_t) job->src,
+                       (phys_addr_t) job->dest,
+                       job->len, fd);
+    return ret;
+}
+
+static inline int dpdmai_dev_set_fd_lf(struct qbman_fd *fd,
+                       struct rte_qdma_job *job,
+                       struct rte_qdma_rbp *rbp,
+                       uint16_t vq_id)
+{
+    struct rte_qdma_job **ppjob;
+    struct qbman_fle *fle;
+    int ret = 0;
+    /*
+     * Get an FLE/SDD from FLE pool.
+     * Note: IO metadata is before the FLE and SDD memory.
+     */
+    ret = rte_mempool_get(qdma_dev.fle_pool, (void **)(&ppjob));
+    if (ret) {
+        DPAA2_QDMA_DP_DEBUG("Memory alloc failed for FLE");
+        return ret;
+    }
+
+    /* Set the metadata */
+    job->vq_id = vq_id;
+    *ppjob = job;
+
+    fle = (struct qbman_fle *)(ppjob + 1);
+
+    DPAA2_SET_FD_ADDR(fd, DPAA2_VADDR_TO_IOVA(fle));
+    DPAA2_SET_FD_COMPOUND_FMT(fd);
+    DPAA2_SET_FD_FRC(fd, QDMA_SER_CTX);
+
+    /* Populate FLE */
+    memset(fle, 0, QDMA_FLE_POOL_SIZE);
+    dpaa2_qdma_populate_fle(fle, rbp, job->src, job->dest,
+                job->len, job->flags);
+
+    return 0;
+}
+
+static inline uint16_t dpdmai_dev_get_job_us(const struct qbman_fd *fd,
+                         struct rte_qdma_job **job)
+{
+    uint16_t vqid;
+    size_t iova;
+    struct rte_qdma_job **ppjob;
+
+    if (fd->simple_pci.saddr_hi & (QDMA_RBP_UPPER_ADDRESS_MASK >> 32))
+        iova = (size_t)(((uint64_t)fd->simple_pci.daddr_hi) << 32
+                | (uint64_t)fd->simple_pci.daddr_lo);
+    else
+        iova = (size_t)(((uint64_t)fd->simple_pci.saddr_hi) << 32
+                | (uint64_t)fd->simple_pci.saddr_lo);
+
+    ppjob = (struct rte_qdma_job **)DPAA2_IOVA_TO_VADDR(iova) - 1;
+    *job = (struct rte_qdma_job *)*ppjob;
+    (*job)->status = (fd->simple_pci.acc_err << 8) |
+             (fd->simple_pci.error);
+    vqid = (*job)->vq_id;
+
+    return vqid;
+}
+
+static inline uint16_t dpdmai_dev_get_job_lf(const struct qbman_fd *fd,
+                         struct rte_qdma_job **job)
+{
+    struct rte_qdma_job **ppjob;
+    uint16_t vqid;
+    /*
+     * Fetch metadata from FLE. job and vq_id were set
+     * in metadata in the enqueue operation.
+     */
+    ppjob = (struct rte_qdma_job **)
+            DPAA2_IOVA_TO_VADDR(DPAA2_GET_FD_ADDR(fd));
+    ppjob -= 1;
+
+    *job = (struct rte_qdma_job *)*ppjob;
+    (*job)->status = (DPAA2_GET_FD_ERR(fd) << 8) |
+             (DPAA2_GET_FD_FRC(fd) & 0xFF);
+    vqid = (*job)->vq_id;
+
+    /* Free FLE to the pool */
+    rte_mempool_put(qdma_dev.fle_pool, (void *)ppjob);
+
+    return vqid;
+}
+
 static struct qdma_hw_queue *
 alloc_hw_queue(uint32_t lcore_id)
 {
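The ultra-short path stores the job back-pointer directly in front of
the data buffer, while the long-format path keeps it in front of the
FLE memory drawn from fle_pool. A sketch of one fle_pool element, as
implied by QDMA_FLE_POOL_SIZE and the set_fd/get_job pairs above:

    /*
     * +-----------------------+ <- rte_mempool_get() returns here (ppjob)
     * | struct rte_qdma_job * |    set in dpdmai_dev_set_fd_lf()
     * +-----------------------+ <- ppjob + 1, programmed as the FD address
     * | 3 x struct qbman_fle  |    (DPAA2_QDMA_MAX_FLE)
     * +-----------------------+
     * | 2 x struct qdma_sdd   |    (DPAA2_QDMA_MAX_SDD)
     * +-----------------------+
     *
     * On dequeue, dpdmai_dev_get_job_lf() steps back one pointer-sized
     * slot from the FD address to recover the job.
     */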
@@ -291,6 +552,13 @@ rte_qdma_configure(struct rte_qdma_config *qdma_config)
     }
     qdma_dev.fle_pool_count = qdma_config->fle_pool_count;
 
+    if (qdma_config->format == RTE_QDMA_ULTRASHORT_FORMAT) {
+        dpdmai_dev_get_job = dpdmai_dev_get_job_us;
+        dpdmai_dev_set_fd = dpdmai_dev_set_fd_us;
+    } else {
+        dpdmai_dev_get_job = dpdmai_dev_get_job_lf;
+        dpdmai_dev_set_fd = dpdmai_dev_set_fd_lf;
+    }
     return 0;
 }
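Resolving the FD format once at configure time keeps per-job branching
off the hot path: enqueue and dequeue simply call through the global
pointers. An illustrative loop, not the driver's actual
dpdmai_dev_enqueue_multi() body:

    static int enqueue_jobs(struct qbman_fd *fds, struct rte_qdma_job **jobs,
                struct rte_qdma_rbp *rbp, uint16_t vq_id, int n)
    {
        int i, ret;

        for (i = 0; i < n; i++) {
            /* Resolves to the _us or _lf variant selected once
             * in rte_qdma_configure(). */
            ret = dpdmai_dev_set_fd(&fds[i], jobs[i], rbp, vq_id);
            if (ret)
                return ret;
        }
        return n;
    }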
@@ -379,112 +647,6 @@ rte_qdma_vq_create_rbp(uint32_t lcore_id, uint32_t flags,
 
     return i;
 }
 
-static void
-dpaa2_qdma_populate_fle(struct qbman_fle *fle,
-            struct rte_qdma_rbp *rbp,
-            uint64_t src, uint64_t dest,
-            size_t len, uint32_t flags)
-{
-    struct qdma_sdd *sdd;
-
-    sdd = (struct qdma_sdd *)((uint8_t *)(fle) +
-        (DPAA2_QDMA_MAX_FLE * sizeof(struct qbman_fle)));
-
-    /* first frame list to source descriptor */
-    DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sdd));
-    DPAA2_SET_FLE_LEN(fle, (2 * (sizeof(struct qdma_sdd))));
-
-    /* source and destination descriptor */
-    if (rbp && rbp->enable) {
-        /* source */
-        sdd->read_cmd.portid = rbp->sportid;
-        sdd->rbpcmd_simple.pfid = rbp->spfid;
-        sdd->rbpcmd_simple.vfid = rbp->svfid;
-
-        if (rbp->srbp) {
-            sdd->read_cmd.rbp = rbp->srbp;
-            sdd->read_cmd.rdtype = DPAA2_RBP_MEM_RW;
-        } else {
-            sdd->read_cmd.rdtype = dpaa2_coherent_no_alloc_cache;
-        }
-        sdd++;
-        /* destination */
-        sdd->write_cmd.portid = rbp->dportid;
-        sdd->rbpcmd_simple.pfid = rbp->dpfid;
-        sdd->rbpcmd_simple.vfid = rbp->dvfid;
-
-        if (rbp->drbp) {
-            sdd->write_cmd.rbp = rbp->drbp;
-            sdd->write_cmd.wrttype = DPAA2_RBP_MEM_RW;
-        } else {
-            sdd->write_cmd.wrttype = dpaa2_coherent_alloc_cache;
-        }
-
-    } else {
-        sdd->read_cmd.rdtype = dpaa2_coherent_no_alloc_cache;
-        sdd++;
-        sdd->write_cmd.wrttype = dpaa2_coherent_alloc_cache;
-    }
-    fle++;
-    /* source frame list to source buffer */
-    if (flags & RTE_QDMA_JOB_SRC_PHY) {
-        DPAA2_SET_FLE_ADDR(fle, src);
-        DPAA2_SET_FLE_BMT(fle);
-    } else {
-        DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(src));
-    }
-    DPAA2_SET_FLE_LEN(fle, len);
-
-    fle++;
-    /* destination frame list to destination buffer */
-    if (flags & RTE_QDMA_JOB_DEST_PHY) {
-        DPAA2_SET_FLE_BMT(fle);
-        DPAA2_SET_FLE_ADDR(fle, dest);
-    } else {
-        DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(dest));
-    }
-    DPAA2_SET_FLE_LEN(fle, len);
-
-    /* Final bit: 1, for last frame list */
-    DPAA2_SET_FLE_FIN(fle);
-}
-
-static inline uint16_t dpdmai_dev_set_fd(struct qbman_fd *fd,
-                     struct rte_qdma_job *job,
-                     struct rte_qdma_rbp *rbp,
-                     uint16_t vq_id)
-{
-    struct qdma_io_meta *io_meta;
-    struct qbman_fle *fle;
-    int ret = 0;
-    /*
-     * Get an FLE/SDD from FLE pool.
-     * Note: IO metadata is before the FLE and SDD memory.
-     */
-    ret = rte_mempool_get(qdma_dev.fle_pool, (void **)(&io_meta));
-    if (ret) {
-        DPAA2_QDMA_DP_DEBUG("Memory alloc failed for FLE");
-        return ret;
-    }
-
-    /* Set the metadata */
-    io_meta->cnxt = (size_t)job;
-    io_meta->id = vq_id;
-
-    fle = (struct qbman_fle *)(io_meta + 1);
-
-    DPAA2_SET_FD_ADDR(fd, DPAA2_VADDR_TO_IOVA(fle));
-    DPAA2_SET_FD_COMPOUND_FMT(fd);
-    DPAA2_SET_FD_FRC(fd, QDMA_SER_CTX);
-
-    /* Populate FLE */
-    memset(fle, 0, QDMA_FLE_POOL_SIZE);
-    dpaa2_qdma_populate_fle(fle, rbp, job->src, job->dest,
-                job->len, job->flags);
-
-    return 0;
-}
-
 static int
 dpdmai_dev_enqueue_multi(struct dpaa2_dpdmai_dev *dpdmai_dev,
         uint16_t txq_id,
@@ -602,31 +764,6 @@ rte_qdma_vq_enqueue(uint16_t vq_id,
     return rte_qdma_vq_enqueue_multi(vq_id, &job, 1);
 }
 
-static inline uint16_t dpdmai_dev_get_job(const struct qbman_fd *fd,
-                      struct rte_qdma_job **job)
-{
-    struct qbman_fle *fle;
-    struct qdma_io_meta *io_meta;
-    uint16_t vqid;
-    /*
-     * Fetch metadata from FLE. job and vq_id were set
-     * in metadata in the enqueue operation.
-     */
-    fle = (struct qbman_fle *)DPAA2_IOVA_TO_VADDR(DPAA2_GET_FD_ADDR(fd));
-    io_meta = (struct qdma_io_meta *)(fle) - 1;
-
-    *job = (struct rte_qdma_job *)(size_t)io_meta->cnxt;
-    (*job)->status = (DPAA2_GET_FD_ERR(fd) << 8) |
-             (DPAA2_GET_FD_FRC(fd) & 0xFF);
-
-    vqid = io_meta->id;
-
-    /* Free FLE to the pool */
-    rte_mempool_put(qdma_dev.fle_pool, io_meta);
-
-    return vqid;
-}
-
 /* Function to receive a QDMA job for a given device and queue*/
 static int
 dpdmai_dev_dequeue_multijob_prefetch(
diff --git a/drivers/raw/dpaa2_qdma/dpaa2_qdma.h b/drivers/raw/dpaa2_qdma/dpaa2_qdma.h
index f15dda694..1bce1a4d6 100644
--- a/drivers/raw/dpaa2_qdma/dpaa2_qdma.h
+++ b/drivers/raw/dpaa2_qdma/dpaa2_qdma.h
@@ -1,12 +1,12 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018 NXP
+ * Copyright 2018-2019 NXP
  */
 
 #ifndef __DPAA2_QDMA_H__
 #define __DPAA2_QDMA_H__
 
 struct qdma_sdd;
-struct qdma_io_meta;
+struct rte_qdma_job;
 
 #define DPAA2_QDMA_MAX_FLE 3
 #define DPAA2_QDMA_MAX_SDD 2
@@ -14,7 +14,7 @@ struct qdma_io_meta;
 #define DPAA2_DPDMAI_MAX_QUEUES 8
 
 /** FLE pool size: 3 Frame list + 2 source/destination descriptor */
-#define QDMA_FLE_POOL_SIZE (sizeof(struct qdma_io_meta) + \
+#define QDMA_FLE_POOL_SIZE (sizeof(struct rte_qdma_job *) + \
         sizeof(struct qbman_fle) * DPAA2_QDMA_MAX_FLE + \
         sizeof(struct qdma_sdd) * DPAA2_QDMA_MAX_SDD)
 /** FLE pool cache size */
@@ -108,17 +108,6 @@ struct qdma_per_core_info {
     uint16_t num_hw_queues;
 };
 
-/** Metadata which is stored with each operation */
-struct qdma_io_meta {
-    /**
-     * Context which is stored in the FLE pool (just before the FLE).
-     * QDMA job is stored as a this context as a part of metadata.
-     */
-    uint64_t cnxt;
-    /** VQ ID is stored as a part of metadata of the enqueue command */
-    uint64_t id;
-};
-
 /** Source/Destination Descriptor */
 struct qdma_sdd {
     uint32_t rsv;
diff --git a/drivers/raw/dpaa2_qdma/rte_pmd_dpaa2_qdma.h b/drivers/raw/dpaa2_qdma/rte_pmd_dpaa2_qdma.h
index a1f905035..4e1268cc5 100644
--- a/drivers/raw/dpaa2_qdma/rte_pmd_dpaa2_qdma.h
+++ b/drivers/raw/dpaa2_qdma/rte_pmd_dpaa2_qdma.h
@@ -33,6 +33,12 @@ enum {
     RTE_QDMA_MODE_VIRTUAL
 };
 
+/** Determines the format of FD */
+enum {
+    RTE_QDMA_LONG_FORMAT,
+    RTE_QDMA_ULTRASHORT_FORMAT,
+};
+
 /**
  * If user has configured a Virtual Queue mode, but for some particular VQ
  * user needs an exclusive H/W queue associated (for better performance
@@ -62,6 +68,8 @@ struct rte_qdma_config {
     uint16_t max_vqs;
     /** mode of operation - physical(h/w) or virtual */
     uint8_t mode;
+    /** FD format */
+    uint8_t format;
     /**
      * User provides this as input to the driver as a size of the FLE pool.
      * FLE's (and corresponding source/destination descriptors) are
@@ -143,6 +151,7 @@ struct rte_qdma_job {
      * lower 8bits fd error
      */
     uint16_t status;
+    uint16_t vq_id;
 };
 
 /**

From patchwork Wed Nov 6 14:43:57 2019
X-Patchwork-Submitter: Nipun Gupta
X-Patchwork-Id: 178720
From: Nipun Gupta
To: dev@dpdk.org
Cc: thomas@monjalon.net, hemant.agrawal@nxp.com, Nipun Gupta
Date: Wed, 6 Nov 2019 20:13:57 +0530
Message-Id: <20191106144357.4899-2-nipun.gupta@nxp.com>
In-Reply-To: <20191106144357.4899-1-nipun.gupta@nxp.com>
References: <20191106144357.4899-1-nipun.gupta@nxp.com>
Subject: [dpdk-dev] [PATCH 2/2] event/dpaa2: support ordered queue case type

The DPAA2 event device supports ordered queues. Accept
RTE_SCHED_TYPE_ORDERED in queue setup, and reject only genuinely
unsupported schedule types through a new default case.

Signed-off-by: Nipun Gupta
---
 drivers/event/dpaa2/dpaa2_eventdev.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/event/dpaa2/dpaa2_eventdev.c b/drivers/event/dpaa2/dpaa2_eventdev.c
index 4ee2c460e..d71361666 100644
--- a/drivers/event/dpaa2/dpaa2_eventdev.c
+++ b/drivers/event/dpaa2/dpaa2_eventdev.c
@@ -477,7 +477,6 @@ dpaa2_eventdev_queue_def_conf(struct rte_eventdev *dev, uint8_t queue_id,
 
     RTE_SET_USED(dev);
     RTE_SET_USED(queue_id);
-    RTE_SET_USED(queue_conf);
 
     queue_conf->nb_atomic_flows = DPAA2_EVENT_QUEUE_ATOMIC_FLOWS;
     queue_conf->schedule_type = RTE_SCHED_TYPE_PARALLEL;
@@ -496,8 +495,9 @@ dpaa2_eventdev_queue_setup(struct rte_eventdev *dev, uint8_t queue_id,
     switch (queue_conf->schedule_type) {
     case RTE_SCHED_TYPE_PARALLEL:
     case RTE_SCHED_TYPE_ATOMIC:
-        break;
     case RTE_SCHED_TYPE_ORDERED:
+        break;
+    default:
         DPAA2_EVENTDEV_ERR("Schedule type is not supported.");
         return -1;
     }
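With the ordered case accepted, an application can request an ordered
queue through the standard eventdev API. A minimal sketch, with the
device and queue ids assumed:

    #include <rte_eventdev.h>

    static int setup_ordered_queue(uint8_t dev_id, uint8_t queue_id)
    {
        struct rte_event_queue_conf qconf;
        int ret;

        ret = rte_event_queue_default_conf_get(dev_id, queue_id, &qconf);
        if (ret)
            return ret;

        qconf.schedule_type = RTE_SCHED_TYPE_ORDERED; /* now accepted */
        return rte_event_queue_setup(dev_id, queue_id, &qconf);
    }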