From patchwork Fri Apr 17 17:12:37 2020
X-Patchwork-Submitter: Jeff Kirsher
X-Patchwork-Id: 221087
From: Jeff Kirsher
To: gregkh@linuxfoundation.org, jgg@ziepe.ca
Cc: Mustafa Ismail, netdev@vger.kernel.org, linux-rdma@vger.kernel.org,
    nhorman@redhat.com, sassmann@redhat.com, Shiraz Saleem
Subject: [RFC PATCH v5 02/16] RDMA/irdma: Implement device initialization definitions
Date: Fri, 17 Apr 2020 10:12:37 -0700
Message-Id: <20200417171251.1533371-3-jeffrey.t.kirsher@intel.com>
In-Reply-To: <20200417171251.1533371-1-jeffrey.t.kirsher@intel.com>
References: <20200417171251.1533371-1-jeffrey.t.kirsher@intel.com>
X-Mailing-List: netdev@vger.kernel.org

From: Mustafa Ismail

Implement device initialization routines and interrupt set-up, and
allocate the object bit-map tracking structures. Also add device-specific
attributes and register definitions.
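As background for review, not part of the patch itself: the "object bit-map
tracking structures" mentioned above are the per-resource allocation bitmaps
that irdma_initialize_hw_rsrc() carves out of rf->mem_rsrc (allocated_qps,
allocated_cqs, allocated_mrs, and so on) and that the driver scans with
find_next_zero_bit() under rsrc_lock. The minimal sketch below shows that
allocation pattern in isolation; the names used here (example_rsrc,
example_alloc_qp_idx, example_free_qp_idx, EXAMPLE_MAX_QP) are illustrative
only and do not exist in the driver.

#include <linux/bitmap.h>
#include <linux/bitops.h>
#include <linux/spinlock.h>
#include <linux/errno.h>

#define EXAMPLE_MAX_QP 1024

struct example_rsrc {
	/* one bit per QP index; rsrc_lock must be set up with spin_lock_init() */
	DECLARE_BITMAP(allocated_qps, EXAMPLE_MAX_QP);
	spinlock_t rsrc_lock;
};

/* Reserve the lowest free QP index: scan the bitmap under the lock and
 * mark the slot used, mirroring the driver's bitmap-tracking pattern.
 */
static int example_alloc_qp_idx(struct example_rsrc *rsrc, u32 *qp_idx)
{
	unsigned long flags;
	unsigned long idx;

	spin_lock_irqsave(&rsrc->rsrc_lock, flags);
	idx = find_next_zero_bit(rsrc->allocated_qps, EXAMPLE_MAX_QP, 0);
	if (idx >= EXAMPLE_MAX_QP) {
		spin_unlock_irqrestore(&rsrc->rsrc_lock, flags);
		return -ENOSPC;
	}
	set_bit(idx, rsrc->allocated_qps);
	spin_unlock_irqrestore(&rsrc->rsrc_lock, flags);

	*qp_idx = (u32)idx;
	return 0;
}

/* Release a previously reserved QP index. */
static void example_free_qp_idx(struct example_rsrc *rsrc, u32 qp_idx)
{
	unsigned long flags;

	spin_lock_irqsave(&rsrc->rsrc_lock, flags);
	clear_bit(qp_idx, rsrc->allocated_qps);
	spin_unlock_irqrestore(&rsrc->rsrc_lock, flags);
}

The patch applies the same idea across QPs, CQs, MRs, PDs, AHs, MCGs and the
ARP table, packing all of the bitmaps into a single kzalloc'd region and
pre-setting the reserved indices (bit 0 of each table, plus QP/CQ/PD 1 and 2
for the ILQ and IEQ).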
Signed-off-by: Mustafa Ismail Signed-off-by: Shiraz Saleem --- drivers/infiniband/hw/irdma/hw.c | 2597 +++++++++++++++++++++++ drivers/infiniband/hw/irdma/i40iw_hw.c | 211 ++ drivers/infiniband/hw/irdma/i40iw_hw.h | 162 ++ drivers/infiniband/hw/irdma/icrdma_hw.c | 76 + drivers/infiniband/hw/irdma/icrdma_hw.h | 62 + 5 files changed, 3108 insertions(+) create mode 100644 drivers/infiniband/hw/irdma/hw.c create mode 100644 drivers/infiniband/hw/irdma/i40iw_hw.c create mode 100644 drivers/infiniband/hw/irdma/i40iw_hw.h create mode 100644 drivers/infiniband/hw/irdma/icrdma_hw.c create mode 100644 drivers/infiniband/hw/irdma/icrdma_hw.h diff --git a/drivers/infiniband/hw/irdma/hw.c b/drivers/infiniband/hw/irdma/hw.c new file mode 100644 index 000000000000..294ee3c2b0c4 --- /dev/null +++ b/drivers/infiniband/hw/irdma/hw.c @@ -0,0 +1,2597 @@ +// SPDX-License-Identifier: GPL-2.0 or Linux-OpenIB +/* Copyright (c) 2015 - 2019 Intel Corporation */ +#include "main.h" + +static struct irdma_rsrc_limits rsrc_limits_table[] = { + [0] = { + .qplimit = 4096, + }, + [1] = { + .qplimit = 128, + }, + [2] = { + .qplimit = 1024, + }, + [3] = { + .qplimit = 2048, + }, + [4] = { + .qplimit = 16384, + }, + [5] = { + .qplimit = 65536, + }, +}; + +/* types of hmc objects */ +static enum irdma_hmc_rsrc_type iw_hmc_obj_types[] = { + IRDMA_HMC_IW_QP, + IRDMA_HMC_IW_CQ, + IRDMA_HMC_IW_HTE, + IRDMA_HMC_IW_ARP, + IRDMA_HMC_IW_APBVT_ENTRY, + IRDMA_HMC_IW_MR, + IRDMA_HMC_IW_XF, + IRDMA_HMC_IW_XFFL, + IRDMA_HMC_IW_Q1, + IRDMA_HMC_IW_Q1FL, + IRDMA_HMC_IW_TIMER, + IRDMA_HMC_IW_FSIMC, + IRDMA_HMC_IW_FSIAV, + IRDMA_HMC_IW_RRF, + IRDMA_HMC_IW_RRFFL, + IRDMA_HMC_IW_HDR, + IRDMA_HMC_IW_MD, + IRDMA_HMC_IW_OOISC, + IRDMA_HMC_IW_OOISCFFL, +}; + +/** + * irdma_iwarp_ce_handler - handle iwarp completions + * @iwcq: iwarp cq receiving event + */ +static void irdma_iwarp_ce_handler(struct irdma_sc_cq *iwcq) +{ + struct irdma_cq *cq = iwcq->back_cq; + + if (cq->ibcq.comp_handler) + cq->ibcq.comp_handler(&cq->ibcq, cq->ibcq.cq_context); +} + +/** + * irdma_puda_ce_handler - handle puda completion events + * @rf: RDMA PCI function + * @cq: puda completion q for event + */ +static void irdma_puda_ce_handler(struct irdma_pci_f *rf, + struct irdma_sc_cq *cq) +{ + struct irdma_sc_dev *dev = &rf->sc_dev; + enum irdma_status_code status; + u32 compl_error; + + do { + status = irdma_puda_poll_cmpl(dev, cq, &compl_error); + if (status == IRDMA_ERR_Q_EMPTY) + break; + if (status) { + dev_dbg(rfdev_to_dev(dev), "ERR: puda status = %d\n", + status); + break; + } + if (compl_error) { + dev_dbg(rfdev_to_dev(dev), + "ERR: puda compl_err =0x%x\n", compl_error); + break; + } + } while (1); + + dev->ccq_ops->ccq_arm(cq); +} + +/** + * irdma_process_ceq - handle ceq for completions + * @rf: RDMA PCI function + * @ceq: ceq having cq for completion + */ +static void irdma_process_ceq(struct irdma_pci_f *rf, struct irdma_ceq *ceq) +{ + struct irdma_sc_dev *dev = &rf->sc_dev; + struct irdma_sc_ceq *sc_ceq; + struct irdma_sc_cq *cq; + + sc_ceq = &ceq->sc_ceq; + do { + cq = dev->ceq_ops->process_ceq(dev, sc_ceq); + if (!cq) + break; + + if (cq->cq_type == IRDMA_CQ_TYPE_CQP) + queue_work(rf->cqp_cmpl_wq, &rf->cqp_cmpl_work); + else if (cq->cq_type == IRDMA_CQ_TYPE_IWARP) + irdma_iwarp_ce_handler(cq); + else if (cq->cq_type == IRDMA_CQ_TYPE_ILQ || + cq->cq_type == IRDMA_CQ_TYPE_IEQ) + irdma_puda_ce_handler(rf, cq); + } while (1); +} + +/** + * irdma_process_aeq - handle aeq events + * @rf: RDMA PCI function + */ +static void irdma_process_aeq(struct irdma_pci_f *rf) 
+{ + struct irdma_sc_dev *dev = &rf->sc_dev; + struct irdma_aeq *aeq = &rf->aeq; + struct irdma_sc_aeq *sc_aeq = &aeq->sc_aeq; + struct irdma_aeqe_info aeinfo; + struct irdma_aeqe_info *info = &aeinfo; + int ret; + struct irdma_qp *iwqp = NULL; + struct irdma_sc_cq *cq = NULL; + struct irdma_cq *iwcq = NULL; + struct irdma_sc_qp *qp = NULL; + struct irdma_qp_host_ctx_info *ctx_info = NULL; + unsigned long flags; + + u32 aeqcnt = 0; + + if (!sc_aeq->size) + return; + + do { + memset(info, 0, sizeof(*info)); + ret = dev->aeq_ops->get_next_aeqe(sc_aeq, info); + if (ret) + break; + + aeqcnt++; + dev_dbg(rfdev_to_dev(dev), + "AEQ: ae_id = 0x%x bool qp=%d qp_id = %d\n", + info->ae_id, info->qp, info->qp_cq_id); + if (info->qp) { + spin_lock_irqsave(&rf->qptable_lock, flags); + iwqp = rf->qp_table[info->qp_cq_id]; + if (!iwqp) { + spin_unlock_irqrestore(&rf->qptable_lock, + flags); + if (info->ae_id == IRDMA_AE_QP_SUSPEND_COMPLETE) { + struct irdma_device *iwdev; + + iwdev = irdma_get_device(rf->netdev); + if (iwdev) { + atomic_dec(&iwdev->vsi.qp_suspend_reqs); + wake_up(&iwdev->suspend_wq); + irdma_put_device(iwdev); + } + continue; + } + dev_dbg(rfdev_to_dev(dev), + "AEQ: qp_id %d is already freed\n", + info->qp_cq_id); + continue; + } + irdma_add_ref(&iwqp->ibqp); + spin_unlock_irqrestore(&rf->qptable_lock, flags); + qp = &iwqp->sc_qp; + spin_lock_irqsave(&iwqp->lock, flags); + iwqp->hw_tcp_state = info->tcp_state; + iwqp->hw_iwarp_state = info->iwarp_state; + iwqp->last_aeq = info->ae_id; + spin_unlock_irqrestore(&iwqp->lock, flags); + ctx_info = &iwqp->ctx_info; + if (rdma_protocol_roce(&iwqp->iwdev->ibdev, 1)) + ctx_info->roce_info->err_rq_idx_valid = true; + else + ctx_info->iwarp_info->err_rq_idx_valid = true; + } else { + if (info->ae_id != IRDMA_AE_CQ_OPERATION_ERROR) + continue; + } + + switch (info->ae_id) { + struct irdma_cm_node *cm_node; + case IRDMA_AE_LLP_CONNECTION_ESTABLISHED: + cm_node = iwqp->cm_node; + if (cm_node->accept_pend) { + atomic_dec(&cm_node->listener->pend_accepts_cnt); + cm_node->accept_pend = 0; + } + iwqp->rts_ae_rcvd = 1; + wake_up_interruptible(&iwqp->waitq); + break; + case IRDMA_AE_LLP_FIN_RECEIVED: + case IRDMA_AE_RDMAP_ROE_BAD_LLP_CLOSE: + if (qp->term_flags) + break; + if (atomic_inc_return(&iwqp->close_timer_started) == 1) { + iwqp->hw_tcp_state = IRDMA_TCP_STATE_CLOSE_WAIT; + if (iwqp->hw_tcp_state == IRDMA_TCP_STATE_CLOSE_WAIT && + iwqp->ibqp_state == IB_QPS_RTS) { + irdma_next_iw_state(iwqp, + IRDMA_QP_STATE_CLOSING, + 0, 0, 0); + irdma_cm_disconn(iwqp); + } + iwqp->cm_id->add_ref(iwqp->cm_id); + irdma_schedule_cm_timer(iwqp->cm_node, + (struct irdma_puda_buf *)iwqp, + IRDMA_TIMER_TYPE_CLOSE, + 1, 0); + } + break; + case IRDMA_AE_LLP_CLOSE_COMPLETE: + if (qp->term_flags) + irdma_terminate_done(qp, 0); + else + irdma_cm_disconn(iwqp); + break; + case IRDMA_AE_BAD_CLOSE: + /* fall through */ + case IRDMA_AE_RESET_SENT: + irdma_next_iw_state(iwqp, IRDMA_QP_STATE_ERROR, 1, 0, + 0); + irdma_cm_disconn(iwqp); + break; + case IRDMA_AE_LLP_CONNECTION_RESET: + if (atomic_read(&iwqp->close_timer_started)) + break; + irdma_cm_disconn(iwqp); + break; + case IRDMA_AE_QP_SUSPEND_COMPLETE: + atomic_dec(&iwqp->sc_qp.vsi->qp_suspend_reqs); + wake_up(&iwqp->iwdev->suspend_wq); + break; + case IRDMA_AE_TERMINATE_SENT: + irdma_terminate_send_fin(qp); + break; + case IRDMA_AE_LLP_TERMINATE_RECEIVED: + irdma_terminate_received(qp, info); + break; + case IRDMA_AE_CQ_OPERATION_ERROR: + dev_err(rfdev_to_dev(dev), + "Processing an iWARP related AE for CQ misc = 0x%04X\n", + 
info->ae_id); + cq = (struct irdma_sc_cq *)(unsigned long) + info->compl_ctx; + + iwcq = cq->back_cq; + + if (iwcq->ibcq.event_handler) { + struct ib_event ibevent; + + ibevent.device = iwcq->ibcq.device; + ibevent.event = IB_EVENT_CQ_ERR; + ibevent.element.cq = &iwcq->ibcq; + iwcq->ibcq.event_handler(&ibevent, + iwcq->ibcq.cq_context); + } + break; + case IRDMA_AE_LLP_DOUBT_REACHABILITY: + case IRDMA_AE_RESOURCE_EXHAUSTION: + break; + case IRDMA_AE_PRIV_OPERATION_DENIED: + case IRDMA_AE_STAG_ZERO_INVALID: + case IRDMA_AE_IB_RREQ_AND_Q1_FULL: + case IRDMA_AE_DDP_UBE_INVALID_DDP_VERSION: + case IRDMA_AE_DDP_UBE_INVALID_MO: + case IRDMA_AE_DDP_UBE_INVALID_QN: + case IRDMA_AE_DDP_NO_L_BIT: + case IRDMA_AE_RDMAP_ROE_INVALID_RDMAP_VERSION: + case IRDMA_AE_RDMAP_ROE_UNEXPECTED_OPCODE: + case IRDMA_AE_ROE_INVALID_RDMA_READ_REQUEST: + case IRDMA_AE_ROE_INVALID_RDMA_WRITE_OR_READ_RESP: + case IRDMA_AE_INVALID_ARP_ENTRY: + case IRDMA_AE_INVALID_TCP_OPTION_RCVD: + case IRDMA_AE_STALE_ARP_ENTRY: + case IRDMA_AE_LLP_RECEIVED_MPA_CRC_ERROR: + case IRDMA_AE_LLP_SEGMENT_TOO_SMALL: + case IRDMA_AE_LLP_SYN_RECEIVED: + case IRDMA_AE_LLP_TOO_MANY_RETRIES: + case IRDMA_AE_LCE_QP_CATASTROPHIC: + case IRDMA_AE_LCE_FUNCTION_CATASTROPHIC: + case IRDMA_AE_LCE_CQ_CATASTROPHIC: + case IRDMA_AE_UDA_XMIT_DGRAM_TOO_LONG: + if (rdma_protocol_roce(&iwqp->iwdev->ibdev, 1)) + ctx_info->roce_info->err_rq_idx_valid = false; + else + ctx_info->iwarp_info->err_rq_idx_valid = false; + /* fall through */ + default: + dev_err(rfdev_to_dev(dev), + "abnormal ae_id = 0x%x bool qp=%d qp_id = %d\n", + info->ae_id, info->qp, info->qp_cq_id); + if (rdma_protocol_roce(&iwqp->iwdev->ibdev, 1)) { + if (!info->sq && ctx_info->roce_info->err_rq_idx_valid) { + ctx_info->roce_info->err_rq_idx = info->wqe_idx; + ret = dev->iw_priv_qp_ops->qp_setctx_roce(&iwqp->sc_qp, + iwqp->host_ctx.va, + ctx_info); + } + irdma_cm_disconn(iwqp); + break; + } + if (!info->sq && ctx_info->iwarp_info->err_rq_idx_valid) { + ctx_info->iwarp_info->err_rq_idx = info->wqe_idx; + ctx_info->tcp_info_valid = false; + ctx_info->iwarp_info_valid = false; + ret = dev->iw_priv_qp_ops->qp_setctx(&iwqp->sc_qp, + iwqp->host_ctx.va, + ctx_info); + } + if (iwqp->hw_iwarp_state != IRDMA_QP_STATE_RTS && + iwqp->hw_iwarp_state != IRDMA_QP_STATE_TERMINATE) { + irdma_next_iw_state(iwqp, IRDMA_QP_STATE_ERROR, 1, 0, 0); + irdma_cm_disconn(iwqp); + } else { + irdma_terminate_connection(qp, info); + } + break; + } + if (info->qp) + irdma_rem_ref(&iwqp->ibqp); + } while (1); + + if (aeqcnt) + dev->aeq_ops->repost_aeq_entries(dev, aeqcnt); +} + +/** + * irdma_enable_intr - set up device interrupts + * @dev: hardware control device structure + * @msix_id: id of the interrupt to be enabled + */ +static void irdma_ena_intr(struct irdma_sc_dev *dev, u32 msix_id) +{ + dev->irq_ops->irdma_en_irq(dev, msix_id); +} + +/** + * irdma_dpc - tasklet for aeq and ceq 0 + * @data: RDMA PCI function + */ +static void irdma_dpc(unsigned long data) +{ + struct irdma_pci_f *rf = (struct irdma_pci_f *)data; + + if (rf->msix_shared) + irdma_process_ceq(rf, rf->ceqlist); + irdma_process_aeq(rf); + irdma_ena_intr(&rf->sc_dev, rf->iw_msixtbl[0].idx); +} + +/** + * irdma_ceq_dpc - dpc handler for CEQ + * @data: data points to CEQ + */ +static void irdma_ceq_dpc(unsigned long data) +{ + struct irdma_ceq *iwceq = (struct irdma_ceq *)data; + struct irdma_pci_f *rf = iwceq->rf; + + irdma_process_ceq(rf, iwceq); + irdma_ena_intr(&rf->sc_dev, iwceq->msix_idx); +} + +/** + * irdma_save_msix_info - copy msix vector 
information to iwarp device + * @rf: RDMA PCI function + * + * Allocate iwdev msix table and copy the ldev msix info to the table + * Return 0 if successful, otherwise return error + */ +static enum irdma_status_code irdma_save_msix_info(struct irdma_pci_f *rf) +{ + struct irdma_priv_ldev *ldev = &rf->ldev; + struct irdma_qvlist_info *iw_qvlist; + struct irdma_qv_info *iw_qvinfo; + struct msix_entry *pmsix; + u32 ceq_idx; + u32 i; + u32 size; + + if (!ldev->msix_count) { + pr_err("No MSI-X vectors for RDMA\n"); + return IRDMA_ERR_CFG; + } + + rf->msix_count = ldev->msix_count; + size = sizeof(struct irdma_msix_vector) * rf->msix_count; + size += sizeof(struct irdma_qvlist_info); + size += sizeof(struct irdma_qv_info) * rf->msix_count - 1; + rf->iw_msixtbl = kzalloc(size, GFP_KERNEL); + if (!rf->iw_msixtbl) + return IRDMA_ERR_NO_MEMORY; + + rf->iw_qvlist = (struct irdma_qvlist_info *) + (&rf->iw_msixtbl[rf->msix_count]); + iw_qvlist = rf->iw_qvlist; + iw_qvinfo = iw_qvlist->qv_info; + iw_qvlist->num_vectors = rf->msix_count; + if (rf->msix_count <= num_online_cpus()) + rf->msix_shared = true; + + for (i = 0, ceq_idx = 0, pmsix = ldev->msix_entries; i < rf->msix_count; + i++, iw_qvinfo++, pmsix++) { + rf->iw_msixtbl[i].idx = pmsix->entry; + rf->iw_msixtbl[i].irq = pmsix->vector; + rf->iw_msixtbl[i].cpu_affinity = ceq_idx; + if (!i) { + iw_qvinfo->aeq_idx = 0; + if (rf->msix_shared) + iw_qvinfo->ceq_idx = ceq_idx++; + else + iw_qvinfo->ceq_idx = IRDMA_Q_INVALID_IDX; + } else { + iw_qvinfo->aeq_idx = IRDMA_Q_INVALID_IDX; + iw_qvinfo->ceq_idx = ceq_idx++; + } + iw_qvinfo->itr_idx = 3; + iw_qvinfo->v_idx = rf->iw_msixtbl[i].idx; + } + + return 0; +} + +/** + * irdma_irq_handler - interrupt handler for aeq and ceq0 + * @irq: Interrupt request number + * @data: RDMA PCI function + */ +static irqreturn_t irdma_irq_handler(int irq, void *data) +{ + struct irdma_pci_f *rf = data; + + tasklet_schedule(&rf->dpc_tasklet); + + return IRQ_HANDLED; +} + +/** + * irdma_ceq_handler - interrupt handler for ceq + * @irq: interrupt request number + * @data: ceq pointer + */ +static irqreturn_t irdma_ceq_handler(int irq, void *data) +{ + struct irdma_ceq *iwceq = data; + + if (iwceq->irq != irq) + dev_err(rfdev_to_dev(&iwceq->rf->sc_dev), + "expected irq = %d received irq = %d\n", iwceq->irq, + irq); + tasklet_schedule(&iwceq->dpc_tasklet); + + return IRQ_HANDLED; +} + +/** + * irdma_destroy_irq - destroy device interrupts + * @rf: RDMA PCI function + * @msix_vec: msix vector to disable irq + * @dev_id: parameter to pass to free_irq (used during irq setup) + * + * The function is called when destroying aeq/ceq + */ +static void irdma_destroy_irq(struct irdma_pci_f *rf, + struct irdma_msix_vector *msix_vec, void *dev_id) +{ + struct irdma_sc_dev *dev = &rf->sc_dev; + + dev->irq_ops->irdma_dis_irq(dev, msix_vec->idx); + irq_set_affinity_hint(msix_vec->irq, NULL); + free_irq(msix_vec->irq, dev_id); +} + +/** + * irdma_destroy_cqp - destroy control qp + * @rf: RDMA PCI function + * @free_hwcqp: 1 if hw cqp should be freed + * + * Issue destroy cqp request and + * free the resources associated with the cqp + */ +static void irdma_destroy_cqp(struct irdma_pci_f *rf, bool free_hwcqp) +{ + enum irdma_status_code status = 0; + struct irdma_sc_dev *dev = &rf->sc_dev; + struct irdma_cqp *cqp = &rf->cqp; + + if (rf->cqp_cmpl_wq) + destroy_workqueue(rf->cqp_cmpl_wq); + + if (free_hwcqp) + status = dev->cqp_ops->cqp_destroy(dev->cqp); + if (status) + dev_dbg(rfdev_to_dev(dev), "ERR: Destroy CQP failed %d\n", + status); + + 
irdma_cleanup_pending_cqp_op(rf); + dma_free_coherent(hw_to_dev(dev->hw), cqp->sq.size, cqp->sq.va, + cqp->sq.pa); + cqp->sq.va = NULL; + kfree(cqp->scratch_array); + cqp->scratch_array = NULL; + kfree(cqp->cqp_requests); + cqp->cqp_requests = NULL; +} + +/** + * irdma_destroy_aeq - destroy aeq + * @rf: RDMA PCI function + * + * Issue a destroy aeq request and + * free the resources associated with the aeq + * The function is called during driver unload + */ +static void irdma_destroy_aeq(struct irdma_pci_f *rf) +{ + enum irdma_status_code status = IRDMA_ERR_NOT_READY; + struct irdma_sc_dev *dev = &rf->sc_dev; + struct irdma_aeq *aeq = &rf->aeq; + + if (!rf->msix_shared) + irdma_destroy_irq(rf, rf->iw_msixtbl, rf); + if (rf->reset) + goto exit; + + if (!dev->aeq_ops->aeq_destroy(&aeq->sc_aeq, 0, 1)) + status = dev->aeq_ops->aeq_destroy_done(&aeq->sc_aeq); + if (status) + dev_dbg(rfdev_to_dev(dev), "ERR: Destroy AEQ failed %d\n", + status); + +exit: + dma_free_coherent(hw_to_dev(dev->hw), aeq->mem.size, aeq->mem.va, + aeq->mem.pa); + aeq->mem.va = NULL; +} + +/** + * irdma_destroy_ceq - destroy ceq + * @rf: RDMA PCI function + * @iwceq: ceq to be destroyed + * + * Issue a destroy ceq request and + * free the resources associated with the ceq + */ +static void irdma_destroy_ceq(struct irdma_pci_f *rf, struct irdma_ceq *iwceq) +{ + enum irdma_status_code status; + struct irdma_sc_dev *dev = &rf->sc_dev; + + if (rf->reset) + goto exit; + + status = dev->ceq_ops->ceq_destroy(&iwceq->sc_ceq, 0, 1); + if (status) { + dev_dbg(rfdev_to_dev(dev), + "ERR: CEQ destroy command failed %d\n", status); + goto exit; + } + + status = dev->ceq_ops->cceq_destroy_done(&iwceq->sc_ceq); + if (status) + dev_dbg(rfdev_to_dev(dev), + "ERR: CEQ destroy completion failed %d\n", status); +exit: + dma_free_coherent(hw_to_dev(dev->hw), iwceq->mem.size, iwceq->mem.va, + iwceq->mem.pa); + iwceq->mem.va = NULL; +} + +/** + * irdma_del_ceq_0 - destroy ceq 0 + * @rf: RDMA PCI function + * + * Disable the ceq 0 interrupt and destroy the ceq 0 + */ +static void irdma_del_ceq_0(struct irdma_pci_f *rf) +{ + struct irdma_ceq *iwceq = rf->ceqlist; + struct irdma_msix_vector *msix_vec; + + if (rf->msix_shared) { + msix_vec = &rf->iw_msixtbl[0]; + irdma_destroy_irq(rf, msix_vec, rf); + } else { + msix_vec = &rf->iw_msixtbl[1]; + irdma_destroy_irq(rf, msix_vec, iwceq); + } + irdma_destroy_ceq(rf, iwceq); + rf->sc_dev.ceq_valid = false; + rf->ceqs_count = 0; +} + +/** + * irdma_del_ceqs - destroy all ceq's except CEQ 0 + * @rf: RDMA PCI function + * + * Go through all of the device ceq's, except 0, and for each + * ceq disable the ceq interrupt and destroy the ceq + */ +static void irdma_del_ceqs(struct irdma_pci_f *rf) +{ + struct irdma_ceq *iwceq = &rf->ceqlist[1]; + struct irdma_msix_vector *msix_vec; + u32 i = 0; + + if (rf->msix_shared) + msix_vec = &rf->iw_msixtbl[1]; + else + msix_vec = &rf->iw_msixtbl[2]; + + for (i = 1; i < rf->ceqs_count; i++, msix_vec++, iwceq++) { + irdma_destroy_irq(rf, msix_vec, iwceq); + irdma_cqp_ceq_cmd(&rf->sc_dev, &iwceq->sc_ceq, + IRDMA_OP_CEQ_DESTROY); + dma_free_coherent(hw_to_dev(rf->sc_dev.hw), iwceq->mem.size, + iwceq->mem.va, iwceq->mem.pa); + iwceq->mem.va = NULL; + } + rf->ceqs_count = 1; +} + +/** + * irdma_destroy_ccq - destroy control cq + * @rf: RDMA PCI function + * + * Issue destroy ccq request and + * free the resources associated with the ccq + */ +static void irdma_destroy_ccq(struct irdma_pci_f *rf) +{ + struct irdma_sc_dev *dev = &rf->sc_dev; + struct irdma_ccq *ccq = &rf->ccq; + 
enum irdma_status_code status = 0; + + if (!rf->reset) + status = dev->ccq_ops->ccq_destroy(dev->ccq, 0, true); + if (status) + dev_dbg(rfdev_to_dev(dev), "ERR: CCQ destroy failed %d\n", + status); + dma_free_coherent(hw_to_dev(dev->hw), ccq->mem_cq.size, + ccq->mem_cq.va, ccq->mem_cq.pa); + ccq->mem_cq.va = NULL; +} + +/** + * irdma_close_hmc_objects_type - delete hmc objects of a given type + * @dev: iwarp device + * @obj_type: the hmc object type to be deleted + * @hmc_info: host memory info struct + * @privileged: permission to close HMC objects + * @reset: true if called before reset + */ +static void irdma_close_hmc_objects_type(struct irdma_sc_dev *dev, + enum irdma_hmc_rsrc_type obj_type, + struct irdma_hmc_info *hmc_info, + bool privileged, bool reset) +{ + struct irdma_hmc_del_obj_info info = {}; + + info.hmc_info = hmc_info; + info.rsrc_type = obj_type; + info.count = hmc_info->hmc_obj[obj_type].cnt; + info.privileged = privileged; + if (dev->hmc_ops->del_hmc_object(dev, &info, reset)) + dev_dbg(rfdev_to_dev(dev), + "ERR: del HMC obj of type %d failed\n", obj_type); +} + +/** + * irdma_del_hmc_objects - remove all device hmc objects + * @dev: iwarp device + * @hmc_info: hmc_info to free + * @privileged: permission to delete HMC objects + * @reset: true if called before reset + * @vers: hardware version + */ +static void irdma_del_hmc_objects(struct irdma_sc_dev *dev, + struct irdma_hmc_info *hmc_info, bool privileged, + bool reset, enum irdma_vers vers) +{ + unsigned int i; + + for (i = 0; i < IW_HMC_OBJ_TYPE_NUM; i++) { + if (dev->hmc_info->hmc_obj[iw_hmc_obj_types[i]].cnt) + irdma_close_hmc_objects_type(dev, iw_hmc_obj_types[i], + hmc_info, privileged, reset); + if (vers == IRDMA_GEN_1 && i == IRDMA_HMC_IW_TIMER) + break; + } +} + +/** + * irdma_create_hmc_obj_type - create hmc object of a given type + * @dev: hardware control device structure + * @info: information for the hmc object to create + */ +static enum irdma_status_code +irdma_create_hmc_obj_type(struct irdma_sc_dev *dev, + struct irdma_hmc_create_obj_info *info) +{ + return dev->hmc_ops->create_hmc_object(dev, info); +} + +/** + * irdma_create_hmc_objs - create all hmc objects for the device + * @rf: RDMA PCI function + * @privileged: permission to create HMC objects + * @vers: HW version + * + * Create the device hmc objects and allocate hmc pages + * Return 0 if successful, otherwise clean up and return error + */ +static enum irdma_status_code +irdma_create_hmc_objs(struct irdma_pci_f *rf, bool privileged, enum irdma_vers vers) +{ + struct irdma_sc_dev *dev = &rf->sc_dev; + struct irdma_hmc_create_obj_info info = {}; + enum irdma_status_code status = 0; + int i; + + info.hmc_info = dev->hmc_info; + info.privileged = privileged; + info.entry_type = rf->sd_type; + + for (i = 0; i < IW_HMC_OBJ_TYPE_NUM; i++) { + if (dev->hmc_info->hmc_obj[iw_hmc_obj_types[i]].cnt) { + info.rsrc_type = iw_hmc_obj_types[i]; + info.count = dev->hmc_info->hmc_obj[info.rsrc_type].cnt; + info.add_sd_cnt = 0; + status = irdma_create_hmc_obj_type(dev, &info); + if (status) { + dev_dbg(rfdev_to_dev(dev), + "ERR: create obj type %d status = %d\n", + iw_hmc_obj_types[i], status); + break; + } + } + if (vers == IRDMA_GEN_1 && i == IRDMA_HMC_IW_TIMER) + break; + } + + if (!status) + return dev->hmc_ops->static_hmc_pages_allocated(dev->cqp, 0, + dev->hmc_fn_id, + true, true); + + while (i) { + i--; + /* destroy the hmc objects of a given type */ + irdma_close_hmc_objects_type(dev, iw_hmc_obj_types[i], + dev->hmc_info, privileged, false); + } + + 
return status; +} + +/** + * irdma_obj_aligned_mem - get aligned memory from device allocated memory + * @rf: RDMA PCI function + * @memptr: points to the memory addresses + * @size: size of memory needed + * @mask: mask for the aligned memory + * + * Get aligned memory of the requested size and + * update the memptr to point to the new aligned memory + * Return 0 if successful, otherwise return no memory error + */ +static enum irdma_status_code +irdma_obj_aligned_mem(struct irdma_pci_f *rf, struct irdma_dma_mem *memptr, + u32 size, u32 mask) +{ + unsigned long va, newva; + unsigned long extra; + + va = (unsigned long)rf->obj_next.va; + newva = va; + if (mask) + newva = ALIGN(va, (unsigned long)mask + 1ULL); + extra = newva - va; + memptr->va = (u8 *)va + extra; + memptr->pa = rf->obj_next.pa + extra; + memptr->size = size; + if ((memptr->va + size) > (rf->obj_mem.va + rf->obj_mem.size)) + return IRDMA_ERR_NO_MEMORY; + + rf->obj_next.va = memptr->va + size; + rf->obj_next.pa = memptr->pa + size; + + return 0; +} + +/** + * irdma_create_cqp - create control qp + * @rf: RDMA PCI function + * + * Return 0, if the cqp and all the resources associated with it + * are successfully created, otherwise return error + */ +static enum irdma_status_code irdma_create_cqp(struct irdma_pci_f *rf) +{ + enum irdma_status_code status; + u32 sqsize = IRDMA_CQP_SW_SQSIZE_2048; + struct irdma_dma_mem mem; + struct irdma_sc_dev *dev = &rf->sc_dev; + struct irdma_cqp_init_info cqp_init_info = {}; + struct irdma_cqp *cqp = &rf->cqp; + u16 maj_err, min_err; + int i; + + cqp->cqp_requests = kcalloc(sqsize, sizeof(*cqp->cqp_requests), GFP_KERNEL); + if (!cqp->cqp_requests) + return IRDMA_ERR_NO_MEMORY; + + cqp->scratch_array = kcalloc(sqsize, sizeof(*cqp->scratch_array), GFP_KERNEL); + if (!cqp->scratch_array) { + kfree(cqp->cqp_requests); + return IRDMA_ERR_NO_MEMORY; + } + + dev->cqp = &cqp->sc_cqp; + dev->cqp->dev = dev; + cqp->sq.size = ALIGN(sizeof(struct irdma_cqp_sq_wqe) * sqsize, + IRDMA_CQP_ALIGNMENT); + cqp->sq.va = dma_alloc_coherent(hw_to_dev(dev->hw), cqp->sq.size, + &cqp->sq.pa, GFP_KERNEL); + if (!cqp->sq.va) { + kfree(cqp->scratch_array); + kfree(cqp->cqp_requests); + return IRDMA_ERR_NO_MEMORY; + } + + status = irdma_obj_aligned_mem(rf, &mem, sizeof(struct irdma_cqp_ctx), + IRDMA_HOST_CTX_ALIGNMENT_M); + if (status) + goto exit; + + dev->cqp->host_ctx_pa = mem.pa; + dev->cqp->host_ctx = mem.va; + /* populate the cqp init info */ + cqp_init_info.dev = dev; + cqp_init_info.sq_size = sqsize; + cqp_init_info.sq = cqp->sq.va; + cqp_init_info.sq_pa = cqp->sq.pa; + cqp_init_info.host_ctx_pa = mem.pa; + cqp_init_info.host_ctx = mem.va; + cqp_init_info.hmc_profile = rf->rsrc_profile; + cqp_init_info.ena_vf_count = rf->max_rdma_vfs; + cqp_init_info.scratch_array = cqp->scratch_array; + cqp_init_info.disable_packed = true; + cqp_init_info.protocol_used = rf->protocol_used; + status = dev->cqp_ops->cqp_init(dev->cqp, &cqp_init_info); + if (status) { + dev_dbg(rfdev_to_dev(dev), "ERR: cqp init status %d\n", + status); + goto exit; + } + + status = dev->cqp_ops->cqp_create(dev->cqp, &maj_err, &min_err); + if (status) { + dev_dbg(rfdev_to_dev(dev), + "ERR: cqp create failed - status %d maj_err %d min_err %d\n", + status, maj_err, min_err); + goto exit; + } + + spin_lock_init(&cqp->req_lock); + spin_lock_init(&cqp->compl_lock); + INIT_LIST_HEAD(&cqp->cqp_avail_reqs); + INIT_LIST_HEAD(&cqp->cqp_pending_reqs); + + /* init the waitqueue of the cqp_requests and add them to the list */ + for (i = 0; i < sqsize; i++) { 
+ init_waitqueue_head(&cqp->cqp_requests[i].waitq); + list_add_tail(&cqp->cqp_requests[i].list, &cqp->cqp_avail_reqs); + } + init_waitqueue_head(&cqp->remove_wq); + return 0; + +exit: + irdma_destroy_cqp(rf, false); + + return status; +} + +/** + * irdma_create_ccq - create control cq + * @rf: RDMA PCI function + * + * Return 0, if the ccq and the resources associated with it + * are successfully created, otherwise return error + */ +static enum irdma_status_code irdma_create_ccq(struct irdma_pci_f *rf) +{ + struct irdma_sc_dev *dev = &rf->sc_dev; + enum irdma_status_code status; + struct irdma_ccq_init_info info = {}; + struct irdma_ccq *ccq = &rf->ccq; + + dev->ccq = &ccq->sc_cq; + dev->ccq->dev = dev; + info.dev = dev; + ccq->shadow_area.size = sizeof(struct irdma_cq_shadow_area); + ccq->mem_cq.size = ALIGN(sizeof(struct irdma_cqe) * IW_CCQ_SIZE, + IRDMA_CQ0_ALIGNMENT); + ccq->mem_cq.va = dma_alloc_coherent(hw_to_dev(dev->hw), + ccq->mem_cq.size, &ccq->mem_cq.pa, + GFP_KERNEL); + if (!ccq->mem_cq.va) + return IRDMA_ERR_NO_MEMORY; + + status = irdma_obj_aligned_mem(rf, &ccq->shadow_area, + ccq->shadow_area.size, + IRDMA_SHADOWAREA_M); + if (status) + goto exit; + + ccq->sc_cq.back_cq = ccq; + /* populate the ccq init info */ + info.cq_base = ccq->mem_cq.va; + info.cq_pa = ccq->mem_cq.pa; + info.num_elem = IW_CCQ_SIZE; + info.shadow_area = ccq->shadow_area.va; + info.shadow_area_pa = ccq->shadow_area.pa; + info.ceqe_mask = false; + info.ceq_id_valid = true; + info.shadow_read_threshold = 16; + info.vsi = &rf->default_vsi; + status = dev->ccq_ops->ccq_init(dev->ccq, &info); + if (!status) + status = dev->ccq_ops->ccq_create(dev->ccq, 0, true, true); +exit: + if (status) { + dma_free_coherent(hw_to_dev(dev->hw), ccq->mem_cq.size, + ccq->mem_cq.va, ccq->mem_cq.pa); + ccq->mem_cq.va = NULL; + } + + return status; +} + +/** + * irdma_alloc_set_mac - set up a mac address table entry + * @iwdev: irdma device + * + * Allocate a mac ip entry and add it to the hw table Return 0 + * if successful, otherwise return error + */ +static enum irdma_status_code irdma_alloc_set_mac(struct irdma_device *iwdev) +{ + enum irdma_status_code status; + + status = irdma_alloc_local_mac_entry(iwdev->rf, + &iwdev->mac_ip_table_idx); + if (!status) { + status = irdma_add_local_mac_entry(iwdev->rf, + (u8 *)iwdev->netdev->dev_addr, + (u8)iwdev->mac_ip_table_idx); + if (status) + irdma_del_local_mac_entry(iwdev->rf, + (u8)iwdev->mac_ip_table_idx); + } + return status; +} + +/** + * irdma_configure_ceq_vector - set up the msix interrupt vector for ceq + * @rf: RDMA PCI function + * @iwceq: ceq associated with the vector + * @ceq_id: the id number of the iwceq + * @msix_vec: interrupt vector information + * + * Allocate interrupt resources and enable irq handling + * Return 0 if successful, otherwise return error + */ +static enum irdma_status_code +irdma_cfg_ceq_vector(struct irdma_pci_f *rf, struct irdma_ceq *iwceq, + u32 ceq_id, struct irdma_msix_vector *msix_vec) +{ + int status; + + if (rf->msix_shared && !ceq_id) { + tasklet_init(&rf->dpc_tasklet, irdma_dpc, (unsigned long)rf); + status = request_irq(msix_vec->irq, irdma_irq_handler, 0, + "AEQCEQ", rf); + } else { + tasklet_init(&iwceq->dpc_tasklet, irdma_ceq_dpc, + (unsigned long)iwceq); + + status = request_irq(msix_vec->irq, irdma_ceq_handler, 0, "CEQ", + iwceq); + } + + cpumask_clear(&msix_vec->mask); + cpumask_set_cpu(msix_vec->cpu_affinity, &msix_vec->mask); + irq_set_affinity_hint(msix_vec->irq, &msix_vec->mask); + if (status) { + 
dev_dbg(rfdev_to_dev(&rf->sc_dev), + "ERR: ceq irq config fail\n"); + return IRDMA_ERR_CFG; + } + + msix_vec->ceq_id = ceq_id; + rf->sc_dev.irq_ops->irdma_cfg_ceq(&rf->sc_dev, ceq_id, msix_vec->idx); + + return 0; +} + +/** + * irdma_configure_aeq_vector - set up the msix vector for aeq + * @rf: RDMA PCI function + * + * Allocate interrupt resources and enable irq handling + * Return 0 if successful, otherwise return error + */ +static enum irdma_status_code irdma_cfg_aeq_vector(struct irdma_pci_f *rf) +{ + struct irdma_msix_vector *msix_vec = rf->iw_msixtbl; + u32 ret = 0; + + if (!rf->msix_shared) { + tasklet_init(&rf->dpc_tasklet, irdma_dpc, (unsigned long)rf); + ret = request_irq(msix_vec->irq, irdma_irq_handler, 0, "irdma", + rf); + } + if (ret) { + dev_dbg(rfdev_to_dev(&rf->sc_dev), + "ERR: aeq irq config fail\n"); + return IRDMA_ERR_CFG; + } + + rf->sc_dev.irq_ops->irdma_cfg_aeq(&rf->sc_dev, msix_vec->idx); + + return 0; +} + +/** + * irdma_create_ceq - create completion event queue + * @rf: RDMA PCI function + * @iwceq: pointer to the ceq resources to be created + * @ceq_id: the id number of the iwceq + * @vsi: SC vsi struct + * + * Return 0, if the ceq and the resources associated with it + * are successfully created, otherwise return error + */ +static enum irdma_status_code irdma_create_ceq(struct irdma_pci_f *rf, + struct irdma_ceq *iwceq, + u32 ceq_id, + struct irdma_sc_vsi *vsi) +{ + enum irdma_status_code status; + struct irdma_ceq_init_info info = {}; + struct irdma_sc_dev *dev = &rf->sc_dev; + u64 scratch; + + info.ceq_id = ceq_id; + iwceq->rf = rf; + iwceq->mem.size = ALIGN(sizeof(struct irdma_ceqe) * rf->sc_dev.hmc_info->hmc_obj[IRDMA_HMC_IW_CQ].cnt, + IRDMA_CEQ_ALIGNMENT); + iwceq->mem.va = dma_alloc_coherent(hw_to_dev(dev->hw), + iwceq->mem.size, &iwceq->mem.pa, + GFP_KERNEL); + if (!iwceq->mem.va) + return IRDMA_ERR_NO_MEMORY; + + info.ceq_id = ceq_id; + info.ceqe_base = iwceq->mem.va; + info.ceqe_pa = iwceq->mem.pa; + info.elem_cnt = rf->sc_dev.hmc_info->hmc_obj[IRDMA_HMC_IW_CQ].cnt; + iwceq->sc_ceq.ceq_id = ceq_id; + info.dev = dev; + info.vsi = vsi; + scratch = (uintptr_t)&rf->cqp.sc_cqp; + status = dev->ceq_ops->ceq_init(&iwceq->sc_ceq, &info); + if (!status) { + if (dev->ceq_valid) + status = irdma_cqp_ceq_cmd(&rf->sc_dev, &iwceq->sc_ceq, + IRDMA_OP_CEQ_CREATE); + else + status = dev->ceq_ops->cceq_create(&iwceq->sc_ceq, + scratch); + } + + if (status) { + dma_free_coherent(hw_to_dev(dev->hw), iwceq->mem.size, + iwceq->mem.va, iwceq->mem.pa); + iwceq->mem.va = NULL; + } + + return status; +} + +/** + * irdma_setup_ceq_0 - create CEQ 0 and it's interrupt resource + * @rf: RDMA PCI function + * + * Allocate a list for all device completion event queues + * Create the ceq 0 and configure it's msix interrupt vector + * Return 0, if successfully set up, otherwise return error + */ +static enum irdma_status_code irdma_setup_ceq_0(struct irdma_pci_f *rf) +{ + u32 i; + struct irdma_ceq *iwceq; + struct irdma_msix_vector *msix_vec; + enum irdma_status_code status = 0; + u32 num_ceqs; + + num_ceqs = min(rf->msix_count, rf->sc_dev.hmc_fpm_misc.max_ceqs); + rf->ceqlist = kcalloc(num_ceqs, sizeof(*rf->ceqlist), GFP_KERNEL); + if (!rf->ceqlist) { + status = IRDMA_ERR_NO_MEMORY; + goto exit; + } + + i = rf->msix_shared ? 
0 : 1; + iwceq = &rf->ceqlist[0]; + status = irdma_create_ceq(rf, iwceq, 0, &rf->default_vsi); + if (status) { + dev_dbg(rfdev_to_dev(&rf->sc_dev), + "ERR: create ceq status = %d\n", status); + goto exit; + } + + msix_vec = &rf->iw_msixtbl[i]; + iwceq->irq = msix_vec->irq; + iwceq->msix_idx = msix_vec->idx; + status = irdma_cfg_ceq_vector(rf, iwceq, 0, msix_vec); + if (status) { + irdma_destroy_ceq(rf, iwceq); + goto exit; + } + + irdma_ena_intr(&rf->sc_dev, msix_vec->idx); + rf->ceqs_count++; + +exit: + if (status && !rf->ceqs_count) { + kfree(rf->ceqlist); + rf->ceqlist = NULL; + return status; + } + rf->sc_dev.ceq_valid = true; + + return 0; +} + +/** + * irdma_setup_ceqs - manage the device ceq's and their interrupt resources + * @rf: RDMA PCI function + * @vsi: VSI structure for this CEQ + * + * Allocate a list for all device completion event queues + * Create the ceq's and configure their msix interrupt vectors + * Return 0, if at least one ceq is successfully set up, otherwise return error + */ +static enum irdma_status_code irdma_setup_ceqs(struct irdma_pci_f *rf, + struct irdma_sc_vsi *vsi) +{ + u32 i; + u32 ceq_id; + struct irdma_ceq *iwceq; + struct irdma_msix_vector *msix_vec; + enum irdma_status_code status = 0; + u32 num_ceqs; + + num_ceqs = min(rf->msix_count, rf->sc_dev.hmc_fpm_misc.max_ceqs); + i = (rf->msix_shared) ? 1 : 2; + for (ceq_id = 1; i < num_ceqs; i++, ceq_id++) { + iwceq = &rf->ceqlist[ceq_id]; + status = irdma_create_ceq(rf, iwceq, ceq_id, vsi); + if (status) { + dev_dbg(rfdev_to_dev(&rf->sc_dev), + "ERR: create ceq status = %d\n", status); + break; + } + msix_vec = &rf->iw_msixtbl[i]; + iwceq->irq = msix_vec->irq; + iwceq->msix_idx = msix_vec->idx; + status = irdma_cfg_ceq_vector(rf, iwceq, ceq_id, msix_vec); + if (status) { + irdma_destroy_ceq(rf, iwceq); + break; + } + irdma_ena_intr(&rf->sc_dev, msix_vec->idx); + rf->ceqs_count++; + } + + return status; +} + +/** + * irdma_create_aeq - create async event queue + * @rf: RDMA PCI function + * + * Return 0, if the aeq and the resources associated with it + * are successfully created, otherwise return error + */ +static enum irdma_status_code irdma_create_aeq(struct irdma_pci_f *rf) +{ + enum irdma_status_code status; + struct irdma_aeq_init_info info = {}; + struct irdma_sc_dev *dev = &rf->sc_dev; + struct irdma_aeq *aeq = &rf->aeq; + struct irdma_hmc_info *hmc_info = rf->sc_dev.hmc_info; + u64 scratch = 0; + u32 aeq_size; + + aeq_size = 2 * hmc_info->hmc_obj[IRDMA_HMC_IW_QP].cnt + + hmc_info->hmc_obj[IRDMA_HMC_IW_CQ].cnt; + aeq->mem.size = ALIGN(sizeof(struct irdma_sc_aeqe) * aeq_size, + IRDMA_AEQ_ALIGNMENT); + aeq->mem.va = dma_alloc_coherent(hw_to_dev(dev->hw), aeq->mem.size, + &aeq->mem.pa, GFP_KERNEL); + if (!aeq->mem.va) + return IRDMA_ERR_NO_MEMORY; + + info.aeqe_base = aeq->mem.va; + info.aeq_elem_pa = aeq->mem.pa; + info.elem_cnt = aeq_size; + info.dev = dev; + status = dev->aeq_ops->aeq_init(&aeq->sc_aeq, &info); + if (status) + goto exit; + + status = dev->aeq_ops->aeq_create(&aeq->sc_aeq, scratch, 1); + if (!status) + status = dev->aeq_ops->aeq_create_done(&aeq->sc_aeq); +exit: + if (status) { + dma_free_coherent(hw_to_dev(dev->hw), aeq->mem.size, + aeq->mem.va, aeq->mem.pa); + aeq->mem.va = NULL; + } + + return status; +} + +/** + * irdma_setup_aeq - set up the device aeq + * @rf: RDMA PCI function + * + * Create the aeq and configure its msix interrupt vector + * Return 0 if successful, otherwise return error + */ +static enum irdma_status_code irdma_setup_aeq(struct irdma_pci_f *rf) +{ + struct 
irdma_sc_dev *dev = &rf->sc_dev; + enum irdma_status_code status; + + status = irdma_create_aeq(rf); + if (status) + return status; + + status = irdma_cfg_aeq_vector(rf); + if (status) { + irdma_destroy_aeq(rf); + return status; + } + + if (!rf->msix_shared) + irdma_ena_intr(dev, rf->iw_msixtbl[0].idx); + + return 0; +} + +/** + * irdma_initialize_ilq - create iwarp local queue for cm + * @iwdev: irdma device + * + * Return 0 if successful, otherwise return error + */ +static enum irdma_status_code irdma_initialize_ilq(struct irdma_device *iwdev) +{ + struct irdma_puda_rsrc_info info = {}; + enum irdma_status_code status; + + info.type = IRDMA_PUDA_RSRC_TYPE_ILQ; + info.cq_id = 1; + info.qp_id = 1; + info.count = 1; + info.pd_id = 1; + info.sq_size = min(iwdev->rf->max_qp / 2, (u32)32768); + info.rq_size = info.sq_size; + info.buf_size = 1024; + info.tx_buf_cnt = 2 * info.sq_size; + info.receive = irdma_receive_ilq; + info.xmit_complete = irdma_free_sqbuf; + status = irdma_puda_create_rsrc(&iwdev->vsi, &info); + if (status) + dev_dbg(rfdev_to_dev(&iwdev->rf->sc_dev), + "ERR: ilq create fail\n"); + + return status; +} + +/** + * irdma_initialize_ieq - create iwarp exception queue + * @iwdev: irdma device + * + * Return 0 if successful, otherwise return error + */ +static enum irdma_status_code irdma_initialize_ieq(struct irdma_device *iwdev) +{ + struct irdma_puda_rsrc_info info = {}; + enum irdma_status_code status; + + info.type = IRDMA_PUDA_RSRC_TYPE_IEQ; + info.cq_id = 2; + info.qp_id = iwdev->vsi.exception_lan_q; + info.count = 1; + info.pd_id = 2; + info.sq_size = min(iwdev->rf->max_qp / 2, (u32)32768); + info.rq_size = info.sq_size; + info.buf_size = iwdev->vsi.mtu + IRDMA_IPV4_PAD; + info.tx_buf_cnt = 4096; + status = irdma_puda_create_rsrc(&iwdev->vsi, &info); + if (status) + dev_dbg(rfdev_to_dev(&iwdev->rf->sc_dev), + "ERR: ieq create fail\n"); + + return status; +} + +/** + * irdma_reinitialize_ieq - destroy and re-create ieq + * @vsi: VSI structure + */ +void irdma_reinitialize_ieq(struct irdma_sc_vsi *vsi) +{ + struct irdma_device *iwdev = vsi->back_vsi; + struct irdma_pci_f *rf = iwdev->rf; + + irdma_puda_dele_rsrc(vsi, IRDMA_PUDA_RSRC_TYPE_IEQ, false); + if (irdma_initialize_ieq(iwdev)) { + iwdev->reset = true; + rf->gen_ops.request_reset(rf); + } +} + +/** + * irdma_hmc_setup - create hmc objects for the device + * @rf: RDMA PCI function + * + * Set up the device private memory space for the number and size of + * the hmc objects and create the objects + * Return 0 if successful, otherwise return error + */ +static enum irdma_status_code irdma_hmc_setup(struct irdma_pci_f *rf) +{ + enum irdma_status_code status; + u32 qpcnt; + + if (rf->rdma_ver == IRDMA_GEN_1) + qpcnt = rsrc_limits_table[rf->limits_sel].qplimit * 2; + else + qpcnt = rsrc_limits_table[rf->limits_sel].qplimit; + + rf->sd_type = IRDMA_SD_TYPE_DIRECT; + status = irdma_cfg_fpm_val(&rf->sc_dev, qpcnt); + if (status) + return status; + + status = irdma_create_hmc_objs(rf, true, rf->rdma_ver); + + return status; +} + +/** + * irdma_del_init_mem - deallocate memory resources + * @rf: RDMA PCI function + */ +static void irdma_del_init_mem(struct irdma_pci_f *rf) +{ + struct irdma_sc_dev *dev = &rf->sc_dev; + + kfree(dev->hmc_info->sd_table.sd_entry); + dev->hmc_info->sd_table.sd_entry = NULL; + kfree(rf->mem_rsrc); + rf->mem_rsrc = NULL; + dma_free_coherent(hw_to_dev(&rf->hw), rf->obj_mem.size, + rf->obj_mem.va, rf->obj_mem.pa); + rf->obj_mem.va = NULL; + if (rf->rdma_ver != IRDMA_GEN_1) { + 
kfree(rf->allocated_ws_nodes); + rf->allocated_ws_nodes = NULL; + } + kfree(rf->ceqlist); + rf->ceqlist = NULL; + kfree(rf->iw_msixtbl); + rf->iw_msixtbl = NULL; + kfree(rf->hmc_info_mem); + rf->hmc_info_mem = NULL; +} +/** + * irdma_initialize_dev - initialize device + * @rf: RDMA PCI function + * @ldev: lan device information + * + * Allocate memory for the hmc objects and initialize iwdev + * Return 0 if successful, otherwise clean up the resources + * and return error + */ +static enum irdma_status_code irdma_initialize_dev(struct irdma_pci_f *rf, + struct irdma_priv_ldev *ldev) +{ + enum irdma_status_code status; + struct irdma_sc_dev *dev = &rf->sc_dev; + struct irdma_device_init_info info = {}; + struct irdma_dma_mem mem; + u32 size; + + size = sizeof(struct irdma_hmc_pble_rsrc) + + sizeof(struct irdma_hmc_info) + + (sizeof(struct irdma_hmc_obj_info) * IRDMA_HMC_IW_MAX); + + rf->hmc_info_mem = kzalloc(size, GFP_KERNEL); + if (!rf->hmc_info_mem) + return IRDMA_ERR_NO_MEMORY; + + rf->pble_rsrc = (struct irdma_hmc_pble_rsrc *)rf->hmc_info_mem; + dev->hmc_info = &rf->hw.hmc; + dev->hmc_info->hmc_obj = (struct irdma_hmc_obj_info *) + (rf->pble_rsrc + 1); + + status = irdma_obj_aligned_mem(rf, &mem, IRDMA_QUERY_FPM_BUF_SIZE, + IRDMA_FPM_QUERY_BUF_ALIGNMENT_M); + if (status) + goto error; + + info.fpm_query_buf_pa = mem.pa; + info.fpm_query_buf = mem.va; + info.init_hw = rf->gen_ops.init_hw; + + status = irdma_obj_aligned_mem(rf, &mem, IRDMA_COMMIT_FPM_BUF_SIZE, + IRDMA_FPM_COMMIT_BUF_ALIGNMENT_M); + if (status) + goto error; + + info.fpm_commit_buf_pa = mem.pa; + info.fpm_commit_buf = mem.va; + + info.bar0 = rf->hw.hw_addr; + info.hmc_fn_id = (u8)ldev->fn_num; + info.privileged = !ldev->ftype; + info.hw = &rf->hw; + info.vchnl_send = NULL; + status = irdma_sc_ctrl_init(rf->rdma_ver, &rf->sc_dev, &info); + if (status) + goto error; + + return status; +error: + kfree(rf->hmc_info_mem); + rf->hmc_info_mem = NULL; + + return status; +} + +/** + * irdma_rt_deinit_hw - clean up the irdma device resources + * @iwdev: irdma device + * + * remove the mac ip entry and ipv4/ipv6 addresses, destroy the + * device queues and free the pble and the hmc objects + */ +void irdma_rt_deinit_hw(struct irdma_device *iwdev) +{ + dev_dbg(rfdev_to_dev(&iwdev->rf->sc_dev), "INIT: state = %d\n", + iwdev->init_state); + + switch (iwdev->init_state) { + case IP_ADDR_REGISTERED: + if (iwdev->rf->sc_dev.hw_attrs.uk_attrs.hw_rev == IRDMA_GEN_1) + irdma_del_local_mac_entry(iwdev->rf, + (u8)iwdev->mac_ip_table_idx); + /* fallthrough */ + case PBLE_CHUNK_MEM: + /* fallthrough */ + case CEQS_CREATED: + /* fallthrough */ + case IEQ_CREATED: + irdma_puda_dele_rsrc(&iwdev->vsi, IRDMA_PUDA_RSRC_TYPE_IEQ, + iwdev->reset); + /* fallthrough */ + case ILQ_CREATED: + if (iwdev->create_ilq) + irdma_puda_dele_rsrc(&iwdev->vsi, + IRDMA_PUDA_RSRC_TYPE_ILQ, + iwdev->reset); + break; + default: + dev_warn(rfdev_to_dev(&iwdev->rf->sc_dev), + "bad init_state = %d\n", iwdev->init_state); + break; + } + + irdma_cleanup_cm_core(&iwdev->cm_core); + if (iwdev->vsi.pestat) { + irdma_vsi_stats_free(&iwdev->vsi); + kfree(iwdev->vsi.pestat); + } + destroy_workqueue(iwdev->cleanup_wq); + list_del(&iwdev->list); +} + +/** + * irdma_setup_init_state - set up the initial device struct + * @rf: RDMA PCI function + * + * Initialize the iwarp device and its hdl information + * using the ldev and client information + * Return 0 if successful, otherwise return error + */ +static enum irdma_status_code irdma_setup_init_state(struct irdma_pci_f *rf) +{ + 
struct irdma_priv_ldev *ldev = &rf->ldev; + enum irdma_status_code status; + + status = irdma_save_msix_info(rf); + if (status) + return status; + + rf->hw.pdev = rf->pdev; + rf->obj_mem.size = ALIGN(8192, IRDMA_HW_PAGE_SIZE); + rf->obj_mem.va = dma_alloc_coherent(hw_to_dev(&rf->hw), + rf->obj_mem.size, &rf->obj_mem.pa, + GFP_KERNEL); + if (!rf->obj_mem.va) { + kfree(rf->iw_msixtbl); + rf->iw_msixtbl = NULL; + return IRDMA_ERR_NO_MEMORY; + } + + rf->obj_next = rf->obj_mem; + rf->ooo = false; + init_waitqueue_head(&rf->vchnl_waitq); + status = irdma_initialize_dev(rf, ldev); + if (status) { + kfree(rf->iw_msixtbl); + dma_free_coherent(hw_to_dev(&rf->hw), rf->obj_mem.size, + rf->obj_mem.va, rf->obj_mem.pa); + rf->obj_mem.va = NULL; + rf->iw_msixtbl = NULL; + } + + return status; +} + +/** + * irdma_get_used_rsrc - determine resources used internally + * @iwdev: irdma device + * + * Called at the end of open to get all internal allocations + */ +static void irdma_get_used_rsrc(struct irdma_device *iwdev) +{ + iwdev->rf->used_pds = find_next_zero_bit(iwdev->rf->allocated_pds, + iwdev->rf->max_pd, 0); + iwdev->rf->used_qps = find_next_zero_bit(iwdev->rf->allocated_qps, + iwdev->rf->max_qp, 0); + iwdev->rf->used_cqs = find_next_zero_bit(iwdev->rf->allocated_cqs, + iwdev->rf->max_cq, 0); + iwdev->rf->used_mrs = find_next_zero_bit(iwdev->rf->allocated_mrs, + iwdev->rf->max_mr, 0); +} + +void irdma_ctrl_deinit_hw(struct irdma_pci_f *rf) +{ + enum init_completion_state state = rf->init_state; + + rf->init_state = INVALID_STATE; + if (rf->rsrc_created) { + irdma_destroy_pble_prm(rf->pble_rsrc); + irdma_del_ceqs(rf); + rf->rsrc_created = false; + } + switch (state) { + case CEQ0_CREATED: + irdma_del_ceq_0(rf); + /* fallthrough */ + case AEQ_CREATED: + irdma_destroy_aeq(rf); + /* fallthrough */ + case CCQ_CREATED: + irdma_destroy_ccq(rf); + /* fallthrough */ + case HW_RSRC_INITIALIZED: + /* fallthrough */ + case HMC_OBJS_CREATED: + irdma_del_hmc_objects(&rf->sc_dev, rf->sc_dev.hmc_info, true, + rf->reset, rf->rdma_ver); + /* fallthrough */ + case CQP_CREATED: + irdma_destroy_cqp(rf, true); + /* fallthrough */ + case INITIAL_STATE: + irdma_del_init_mem(rf); + break; + case INVALID_STATE: + /* fallthrough */ + default: + pr_warn("bad init_state = %d\n", rf->init_state); + break; + } +} + +/** + * irdma_rt_init_hw - Initializes runtime portion of HW + * @rf: RDMA PCI function + * @iwdev: irdma device + * @l2params: qos, tc, mtu info from netdev driver + * + * Create device queues ILQ, IEQ, CEQs and PBLEs. Setup irdma + * device resource objects. + */ +enum irdma_status_code irdma_rt_init_hw(struct irdma_pci_f *rf, + struct irdma_device *iwdev, + struct irdma_l2params *l2params) +{ + struct irdma_sc_dev *dev = &rf->sc_dev; + enum irdma_status_code status; + struct irdma_vsi_init_info vsi_info = {}; + struct irdma_vsi_stats_info stats_info = {}; + + list_add(&iwdev->list, &rf->vsi_dev_list); + irdma_sc_rt_init(dev); + vsi_info.vm_vf_type = rf->ldev.ftype ? 
IRDMA_VF_TYPE : IRDMA_PF_TYPE; + vsi_info.dev = dev; + vsi_info.back_vsi = iwdev; + vsi_info.params = l2params; + vsi_info.pf_data_vsi_num = iwdev->vsi_num; + vsi_info.register_qset = rf->gen_ops.register_qset; + vsi_info.unregister_qset = rf->gen_ops.unregister_qset; + vsi_info.exception_lan_q = 2; + irdma_sc_vsi_init(&iwdev->vsi, &vsi_info); + + status = irdma_setup_cm_core(iwdev, rf->rdma_ver); + if (status) + return status; + + stats_info.pestat = kzalloc(sizeof(*stats_info.pestat), GFP_KERNEL); + if (!stats_info.pestat) { + irdma_cleanup_cm_core(&iwdev->cm_core); + list_del(&iwdev->list); + return IRDMA_ERR_NO_MEMORY; + } + stats_info.fcn_id = dev->hmc_fn_id; + status = irdma_vsi_stats_init(&iwdev->vsi, &stats_info); + if (status) { + irdma_cleanup_cm_core(&iwdev->cm_core); + kfree(stats_info.pestat); + list_del(&iwdev->list); + return status; + } + + do { + if (iwdev->create_ilq) { + status = irdma_initialize_ilq(iwdev); + if (status) + break; + iwdev->init_state = ILQ_CREATED; + } + status = irdma_initialize_ieq(iwdev); + if (status) + break; + iwdev->init_state = IEQ_CREATED; + if (!rf->rsrc_created) { + status = irdma_setup_ceqs(rf, &iwdev->vsi); + if (status) + break; + iwdev->init_state = CEQS_CREATED; + + status = irdma_hmc_init_pble(&rf->sc_dev, + rf->pble_rsrc); + if (status) { + irdma_del_ceqs(rf); + break; + } + spin_lock_init(&rf->pble_rsrc->pble_lock); + iwdev->init_state = PBLE_CHUNK_MEM; + rf->rsrc_created = true; + } + + iwdev->device_cap_flags = IB_DEVICE_LOCAL_DMA_LKEY | + IB_DEVICE_MEM_WINDOW | + IB_DEVICE_MEM_MGT_EXTENSIONS; + + if (iwdev->rf->sc_dev.hw_attrs.uk_attrs.hw_rev == IRDMA_GEN_1) + irdma_alloc_set_mac(iwdev); + irdma_add_ip(iwdev); + iwdev->init_state = IP_ADDR_REGISTERED; + + /* handles asynch cleanup tasks - disconnect CM , free qp, + * free cq bufs + */ + iwdev->cleanup_wq = alloc_workqueue("irdma-cleanup-wq", + WQ_UNBOUND, WQ_UNBOUND_MAX_ACTIVE); + if (!iwdev->cleanup_wq) + return IRDMA_ERR_NO_MEMORY; + irdma_get_used_rsrc(iwdev); + init_waitqueue_head(&iwdev->suspend_wq); + + return 0; + } while (0); + + dev_err(rfdev_to_dev(dev), "VSI open FAIL status = %d last cmpl = %d\n", + status, iwdev->init_state); + irdma_rt_deinit_hw(iwdev); + + return status; +} + +/** + * irdma_ctrl_init_hw - Initializes control portion of HW + * @rf: RDMA PCI function + * + * Create admin queues, HMC obejcts and RF resource objects + */ +enum irdma_status_code irdma_ctrl_init_hw(struct irdma_pci_f *rf) +{ + struct irdma_sc_dev *dev = &rf->sc_dev; + enum irdma_status_code status; + + INIT_LIST_HEAD(&rf->vsi_dev_list); + + do { + status = irdma_setup_init_state(rf); + if (status) + break; + rf->init_state = INITIAL_STATE; + + status = irdma_create_cqp(rf); + if (status) + break; + rf->init_state = CQP_CREATED; + + status = irdma_hmc_setup(rf); + if (status) + break; + rf->init_state = HMC_OBJS_CREATED; + + status = irdma_initialize_hw_rsrc(rf); + if (status) + break; + rf->init_state = HW_RSRC_INITIALIZED; + + status = irdma_create_ccq(rf); + if (status) + break; + rf->init_state = CCQ_CREATED; + + status = irdma_setup_aeq(rf); + if (status) + break; + rf->init_state = AEQ_CREATED; + rf->sc_dev.feature_info[IRDMA_FEATURE_FW_INFO] = IRDMA_FW_VER_DEFAULT; + + if (rf->rdma_ver != IRDMA_GEN_1) + status = irdma_get_rdma_features(&rf->sc_dev); + if (!status) { + u32 fw_ver = dev->feature_info[IRDMA_FEATURE_FW_INFO]; + u8 hw_rev = dev->hw_attrs.uk_attrs.hw_rev; + + if ((hw_rev == IRDMA_GEN_1 && fw_ver >= IRDMA_FW_VER_0x30010) || + (hw_rev != IRDMA_GEN_1 && fw_ver >= 
IRDMA_FW_VER_0x1000D)) + + dev->hw_attrs.uk_attrs.feature_flags |= IRDMA_FEATURE_RTS_AE | + IRDMA_FEATURE_CQ_RESIZE; + } + + status = irdma_setup_ceq_0(rf); + if (status) + break; + rf->init_state = CEQ0_CREATED; + /* Handles processing of CQP completions */ + rf->cqp_cmpl_wq = alloc_ordered_workqueue("cqp_cmpl_wq", + WQ_HIGHPRI | WQ_UNBOUND); + if (!rf->cqp_cmpl_wq) { + status = IRDMA_ERR_NO_MEMORY; + break; + } + INIT_WORK(&rf->cqp_cmpl_work, cqp_compl_worker); + dev->ccq_ops->ccq_arm(dev->ccq); + return 0; + } while (0); + + pr_err("IRDMA hardware initialization FAILED init_state=%d status=%d\n", + rf->init_state, status); + irdma_ctrl_deinit_hw(rf); + return status; +} + +/** + * irdma_initialize_hw_resources - initialize hw resource tracking array + * @rf: RDMA PCI function + */ +u32 irdma_initialize_hw_rsrc(struct irdma_pci_f *rf) +{ + unsigned long num_pds; + u32 rsrc_size; + u32 max_mr; + u32 max_qp; + u32 max_cq; + u32 arp_table_size; + u32 mrdrvbits; + void *rsrc_ptr; + u32 num_ahs; + u32 num_mcg; + + if (rf->rdma_ver != IRDMA_GEN_1) { + rf->allocated_ws_nodes = + kcalloc(BITS_TO_LONGS(IRDMA_MAX_WS_NODES), + sizeof(unsigned long), GFP_KERNEL); + if (!rf->allocated_ws_nodes) + return -ENOMEM; + + set_bit(0, rf->allocated_ws_nodes); + rf->max_ws_node_id = IRDMA_MAX_WS_NODES; + } + max_qp = rf->sc_dev.hmc_info->hmc_obj[IRDMA_HMC_IW_QP].cnt; + max_cq = rf->sc_dev.hmc_info->hmc_obj[IRDMA_HMC_IW_CQ].cnt; + max_mr = rf->sc_dev.hmc_info->hmc_obj[IRDMA_HMC_IW_MR].cnt; + arp_table_size = rf->sc_dev.hmc_info->hmc_obj[IRDMA_HMC_IW_ARP].cnt; + rf->max_cqe = rf->sc_dev.hw_attrs.uk_attrs.max_hw_cq_size; + num_pds = rf->sc_dev.hw_attrs.max_hw_pds; + rsrc_size = sizeof(struct irdma_arp_entry) * arp_table_size; + rsrc_size += sizeof(unsigned long) * BITS_TO_LONGS(max_qp); + rsrc_size += sizeof(unsigned long) * BITS_TO_LONGS(max_mr); + rsrc_size += sizeof(unsigned long) * BITS_TO_LONGS(max_cq); + rsrc_size += sizeof(unsigned long) * BITS_TO_LONGS(num_pds); + rsrc_size += sizeof(unsigned long) * BITS_TO_LONGS(arp_table_size); + num_ahs = max_qp * 4; + rsrc_size += sizeof(unsigned long) * BITS_TO_LONGS(num_ahs); + num_mcg = max_qp; + rsrc_size += sizeof(unsigned long) * BITS_TO_LONGS(num_mcg); + rsrc_size += sizeof(struct irdma_qp **) * max_qp; + + rf->mem_rsrc = kzalloc(rsrc_size, GFP_KERNEL); + if (!rf->mem_rsrc) { + kfree(rf->allocated_ws_nodes); + rf->allocated_ws_nodes = NULL; + return -ENOMEM; + } + + rf->max_qp = max_qp; + rf->max_mr = max_mr; + rf->max_cq = max_cq; + rf->max_pd = num_pds; + rf->arp_table_size = arp_table_size; + rf->arp_table = (struct irdma_arp_entry *)rf->mem_rsrc; + rsrc_ptr = rf->mem_rsrc + + (sizeof(struct irdma_arp_entry) * arp_table_size); + rf->max_ah = num_ahs; + rf->max_mcg = num_mcg; + rf->allocated_qps = rsrc_ptr; + rf->allocated_cqs = &rf->allocated_qps[BITS_TO_LONGS(max_qp)]; + rf->allocated_mrs = &rf->allocated_cqs[BITS_TO_LONGS(max_cq)]; + rf->allocated_pds = &rf->allocated_mrs[BITS_TO_LONGS(max_mr)]; + rf->allocated_ahs = &rf->allocated_pds[BITS_TO_LONGS(num_pds)]; + rf->allocated_mcgs = &rf->allocated_ahs[BITS_TO_LONGS(num_ahs)]; + rf->allocated_arps = &rf->allocated_mcgs[BITS_TO_LONGS(num_mcg)]; + rf->qp_table = (struct irdma_qp **) + (&rf->allocated_arps[BITS_TO_LONGS(arp_table_size)]); + + set_bit(0, rf->allocated_mrs); + set_bit(0, rf->allocated_qps); + set_bit(0, rf->allocated_cqs); + set_bit(0, rf->allocated_pds); + set_bit(0, rf->allocated_arps); + set_bit(0, rf->allocated_ahs); + set_bit(0, rf->allocated_mcgs); + set_bit(2, rf->allocated_qps); /* qp 
2 IEQ */ + set_bit(1, rf->allocated_qps); /* qp 1 ILQ */ + set_bit(1, rf->allocated_cqs); + set_bit(1, rf->allocated_pds); + set_bit(2, rf->allocated_cqs); + set_bit(2, rf->allocated_pds); + + spin_lock_init(&rf->rsrc_lock); + spin_lock_init(&rf->arp_lock); + spin_lock_init(&rf->qptable_lock); + spin_lock_init(&rf->qh_list_lock); + + INIT_LIST_HEAD(&rf->mc_qht_list.list); + /* stag index mask has a minimum of 14 bits */ + mrdrvbits = 24 - max(get_count_order(rf->max_mr), 14); + rf->mr_stagmask = ~(((1 << mrdrvbits) - 1) << (32 - mrdrvbits)); + + return 0; +} + +/** + * irdma_cqp_ce_handler - handle cqp completions + * @rf: RDMA PCI function + * @cq: cq for cqp completions + */ +void irdma_cqp_ce_handler(struct irdma_pci_f *rf, struct irdma_sc_cq *cq) +{ + struct irdma_cqp_request *cqp_request; + struct irdma_sc_dev *dev = &rf->sc_dev; + u32 cqe_count = 0; + struct irdma_ccq_cqe_info info; + unsigned long flags; + int ret; + + do { + memset(&info, 0, sizeof(info)); + spin_lock_irqsave(&rf->cqp.compl_lock, flags); + ret = dev->ccq_ops->ccq_get_cqe_info(cq, &info); + spin_unlock_irqrestore(&rf->cqp.compl_lock, flags); + if (ret) + break; + + cqp_request = (struct irdma_cqp_request *) + (unsigned long)info.scratch; + if (info.error) + dev_dbg(rfdev_to_dev(dev), + "ERR: opcode = 0x%x maj_err_code = 0x%x min_err_code = 0x%x\n", + info.op_code, info.maj_err_code, + info.min_err_code); + if (cqp_request) { + cqp_request->compl_info.maj_err_code = info.maj_err_code; + cqp_request->compl_info.min_err_code = info.min_err_code; + cqp_request->compl_info.op_ret_val = info.op_ret_val; + cqp_request->compl_info.error = info.error; + + if (cqp_request->waiting) { + cqp_request->request_done = true; + wake_up(&cqp_request->waitq); + irdma_put_cqp_request(&rf->cqp, cqp_request); + } else { + if (cqp_request->callback_fcn) + cqp_request->callback_fcn(cqp_request); + irdma_put_cqp_request(&rf->cqp, cqp_request); + } + } + + cqe_count++; + } while (1); + + if (cqe_count) { + irdma_process_bh(dev); + dev->ccq_ops->ccq_arm(cq); + } +} + +/** + * cqp_compl_worker - Handle cqp completions + * @work: Pointer to work structure + */ +void cqp_compl_worker(struct work_struct *work) +{ + struct irdma_pci_f *rf = container_of(work, struct irdma_pci_f, + cqp_cmpl_work); + struct irdma_sc_cq *cq = &rf->ccq.sc_cq; + + irdma_cqp_ce_handler(rf, cq); +} + +/** + * irdma_next_iw_state - modify qp state + * @iwqp: iwarp qp to modify + * @state: next state for qp + * @del_hash: del hash + * @term: term message + * @termlen: length of term message + */ +void irdma_next_iw_state(struct irdma_qp *iwqp, u8 state, u8 del_hash, u8 term, + u8 termlen) +{ + struct irdma_modify_qp_info info = {}; + + info.next_iwarp_state = state; + info.remove_hash_idx = del_hash; + info.cq_num_valid = true; + info.arp_cache_idx_valid = true; + info.dont_send_term = true; + info.dont_send_fin = true; + info.termlen = termlen; + + if (term & IRDMAQP_TERM_SEND_TERM_ONLY) + info.dont_send_term = false; + if (term & IRDMAQP_TERM_SEND_FIN_ONLY) + info.dont_send_fin = false; + if (iwqp->sc_qp.term_flags && state == IRDMA_QP_STATE_ERROR) + info.reset_tcp_conn = true; + iwqp->hw_iwarp_state = state; + irdma_hw_modify_qp(iwqp->iwdev, iwqp, &info, 0); + iwqp->iwarp_state = info.next_iwarp_state; +} + +/** + * irdma_del_mac_entry - remove a mac entry from the hw table + * @rf: RDMA PCI function + * @idx: the index of the mac ip address to delete + */ +void irdma_del_local_mac_entry(struct irdma_pci_f *rf, u16 idx) +{ + struct irdma_cqp *iwcqp = &rf->cqp; + struct 
irdma_cqp_request *cqp_request; + struct cqp_cmds_info *cqp_info; + enum irdma_status_code status = 0; + + cqp_request = irdma_get_cqp_request(iwcqp, true); + if (!cqp_request) { + pr_err("cqp_request memory failed\n"); + return; + } + + cqp_info = &cqp_request->info; + cqp_info->cqp_cmd = IRDMA_OP_DELETE_LOCAL_MAC_ENTRY; + cqp_info->post_sq = 1; + cqp_info->in.u.del_local_mac_entry.cqp = &iwcqp->sc_cqp; + cqp_info->in.u.del_local_mac_entry.scratch = (uintptr_t)cqp_request; + cqp_info->in.u.del_local_mac_entry.entry_idx = idx; + cqp_info->in.u.del_local_mac_entry.ignore_ref_count = 0; + status = irdma_handle_cqp_op(rf, cqp_request); + if (status) + pr_err("CQP-OP Del MAC entry fail"); +} + +/** + * irdma_add_mac_entry - add a mac ip address entry to the hw table + * @rf: RDMA PCI function + * @mac_addr: pointer to mac address + * @idx: the index of the mac ip address to add + */ +int irdma_add_local_mac_entry(struct irdma_pci_f *rf, u8 *mac_addr, u16 idx) +{ + struct irdma_local_mac_entry_info *info; + struct irdma_cqp *iwcqp = &rf->cqp; + struct irdma_cqp_request *cqp_request; + struct cqp_cmds_info *cqp_info; + enum irdma_status_code status = 0; + + cqp_request = irdma_get_cqp_request(iwcqp, true); + if (!cqp_request) { + pr_err("cqp_request memory failed\n"); + return IRDMA_ERR_NO_MEMORY; + } + + cqp_info = &cqp_request->info; + cqp_info->post_sq = 1; + info = &cqp_info->in.u.add_local_mac_entry.info; + ether_addr_copy(info->mac_addr, mac_addr); + info->entry_idx = idx; + cqp_info->in.u.add_local_mac_entry.scratch = (uintptr_t)cqp_request; + cqp_info->cqp_cmd = IRDMA_OP_ADD_LOCAL_MAC_ENTRY; + cqp_info->in.u.add_local_mac_entry.cqp = &iwcqp->sc_cqp; + cqp_info->in.u.add_local_mac_entry.scratch = (uintptr_t)cqp_request; + status = irdma_handle_cqp_op(rf, cqp_request); + if (status) + pr_err("CQP-OP Add MAC entry fail"); + + return status; +} + +/** + * irdma_alloc_local_mac_entry - allocate a mac entry + * @rf: RDMA PCI function + * @mac_tbl_idx: the index of the new mac address + * + * Allocate a mac address entry and update the mac_tbl_idx + * to hold the index of the newly created mac address + * Return 0 if successful, otherwise return error + */ +int irdma_alloc_local_mac_entry(struct irdma_pci_f *rf, u16 *mac_tbl_idx) +{ + struct irdma_cqp *iwcqp = &rf->cqp; + struct irdma_cqp_request *cqp_request; + struct cqp_cmds_info *cqp_info; + enum irdma_status_code status = 0; + + cqp_request = irdma_get_cqp_request(iwcqp, true); + if (!cqp_request) { + pr_err("cqp_request memory failed\n"); + return IRDMA_ERR_NO_MEMORY; + } + + /* increment refcount, because we need the cqp request ret value */ + refcount_inc(&cqp_request->refcnt); + cqp_info = &cqp_request->info; + cqp_info->cqp_cmd = IRDMA_OP_ALLOC_LOCAL_MAC_ENTRY; + cqp_info->post_sq = 1; + cqp_info->in.u.alloc_local_mac_entry.cqp = &iwcqp->sc_cqp; + cqp_info->in.u.alloc_local_mac_entry.scratch = (uintptr_t)cqp_request; + status = irdma_handle_cqp_op(rf, cqp_request); + if (!status) + *mac_tbl_idx = (u16)cqp_request->compl_info.op_ret_val; + else + pr_err("CQP-OP Alloc MAC entry fail"); + /* decrement refcount and free the cqp request, if no longer used */ + irdma_put_cqp_request(iwcqp, cqp_request); + + return status; +} + +/** + * irdma_cqp_manage_apbvt_cmd - send cqp command manage apbvt + * @iwdev: irdma device + * @accel_local_port: port for apbvt + * @add_port: add ordelete port + */ +static enum irdma_status_code +irdma_cqp_manage_apbvt_cmd(struct irdma_device *iwdev, u16 accel_local_port, + bool add_port) +{ + struct 
irdma_apbvt_info *info; + struct irdma_cqp_request *cqp_request; + struct cqp_cmds_info *cqp_info; + enum irdma_status_code status; + + cqp_request = irdma_get_cqp_request(&iwdev->rf->cqp, add_port); + if (!cqp_request) + return IRDMA_ERR_NO_MEMORY; + + cqp_info = &cqp_request->info; + info = &cqp_info->in.u.manage_apbvt_entry.info; + memset(info, 0, sizeof(*info)); + info->add = add_port; + info->port = accel_local_port; + cqp_info->cqp_cmd = IRDMA_OP_MANAGE_APBVT_ENTRY; + cqp_info->post_sq = 1; + cqp_info->in.u.manage_apbvt_entry.cqp = &iwdev->rf->cqp.sc_cqp; + cqp_info->in.u.manage_apbvt_entry.scratch = (uintptr_t)cqp_request; + status = irdma_handle_cqp_op(iwdev->rf, cqp_request); + if (status) + dev_dbg(rfdev_to_dev(&iwdev->rf->sc_dev), + "ERR: CQP-OP Manage APBVT entry fail"); + + return status; +} + +/** + * irdma_manage_apbvt - add or delete tcp port + * @iwdev: irdma device + * @accel_local_port: port for apbvt + * @add_port: add or delete port + */ +enum irdma_status_code irdma_manage_apbvt(struct irdma_device *iwdev, + u16 accel_local_port, bool add_port) +{ + struct irdma_cm_core *cm_core = &iwdev->cm_core; + enum irdma_status_code status = 0; + unsigned long flags; + bool in_use; + + /* apbvt_lock is held across CQP delete APBVT OP (non-waiting) to + * protect against race where add APBVT CQP can race ahead of the delete + * APBVT for same port. + */ + if (add_port) { + spin_lock_irqsave(&cm_core->apbvt_lock, flags); + in_use = __test_and_set_bit(accel_local_port, + cm_core->ports_in_use); + spin_unlock_irqrestore(&cm_core->apbvt_lock, flags); + if (in_use) + return 0; + return irdma_cqp_manage_apbvt_cmd(iwdev, accel_local_port, + true); + } else { + spin_lock_irqsave(&cm_core->apbvt_lock, flags); + in_use = irdma_port_in_use(cm_core, accel_local_port); + if (in_use) { + spin_unlock_irqrestore(&cm_core->apbvt_lock, flags); + return 0; + } + __clear_bit(accel_local_port, cm_core->ports_in_use); + status = irdma_cqp_manage_apbvt_cmd(iwdev, accel_local_port, + false); + spin_unlock_irqrestore(&cm_core->apbvt_lock, flags); + return status; + } +} + +/** + * irdma_manage_arp_cache - manage hw arp cache + * @rf: RDMA PCI function + * @mac_addr: mac address ptr + * @ip_addr: ip addr for arp cache + * @ipv4: flag inicating IPv4 + * @action: add, delete or modify + */ +void irdma_manage_arp_cache(struct irdma_pci_f *rf, unsigned char *mac_addr, + u32 *ip_addr, bool ipv4, u32 action) +{ + struct irdma_add_arp_cache_entry_info *info; + struct irdma_cqp_request *cqp_request; + struct cqp_cmds_info *cqp_info; + int arp_index; + + arp_index = irdma_arp_table(rf, ip_addr, ipv4, mac_addr, action); + if (arp_index == -1) + return; + + cqp_request = irdma_get_cqp_request(&rf->cqp, false); + if (!cqp_request) + return; + + cqp_info = &cqp_request->info; + if (action == IRDMA_ARP_ADD) { + cqp_info->cqp_cmd = IRDMA_OP_ADD_ARP_CACHE_ENTRY; + info = &cqp_info->in.u.add_arp_cache_entry.info; + memset(info, 0, sizeof(*info)); + info->arp_index = (u16)arp_index; + info->permanent = true; + ether_addr_copy(info->mac_addr, mac_addr); + cqp_info->in.u.add_arp_cache_entry.scratch = + (uintptr_t)cqp_request; + cqp_info->in.u.add_arp_cache_entry.cqp = &rf->cqp.sc_cqp; + } else { + cqp_info->cqp_cmd = IRDMA_OP_DELETE_ARP_CACHE_ENTRY; + cqp_info->in.u.del_arp_cache_entry.scratch = + (uintptr_t)cqp_request; + cqp_info->in.u.del_arp_cache_entry.cqp = &rf->cqp.sc_cqp; + cqp_info->in.u.del_arp_cache_entry.arp_index = arp_index; + } + + cqp_info->in.u.add_arp_cache_entry.cqp = &rf->cqp.sc_cqp; + 
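/* scratch carries the cqp_request pointer back through the CCQ completion; irdma_cqp_ce_handler() recovers it from info.scratch */ +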
cqp_info->in.u.add_arp_cache_entry.scratch = (uintptr_t)cqp_request; + cqp_info->post_sq = 1; + if (irdma_handle_cqp_op(rf, cqp_request)) + dev_dbg(rfdev_to_dev(&rf->sc_dev), + "ERR: CQP-OP Add/Del Arp Cache entry fail"); +} + +/** + * irdma_send_syn_cqp_callback - do syn/ack after qhash + * @cqp_request: qhash cqp completion + */ +static void irdma_send_syn_cqp_callback(struct irdma_cqp_request *cqp_request) +{ + irdma_send_syn(cqp_request->param, 1); +} + +/** + * irdma_manage_qhash - add or modify qhash + * @iwdev: irdma device + * @cminfo: cm info for qhash + * @etype: type (syn or quad) + * @mtype: type of qhash + * @cmnode: cmnode associated with connection + * @wait: wait for completion + */ +enum irdma_status_code +irdma_manage_qhash(struct irdma_device *iwdev, struct irdma_cm_info *cminfo, + enum irdma_quad_entry_type etype, + enum irdma_quad_hash_manage_type mtype, void *cmnode, + bool wait) +{ + struct irdma_qhash_table_info *info; + struct irdma_sc_dev *dev = &iwdev->rf->sc_dev; + enum irdma_status_code status; + struct irdma_cqp *iwcqp = &iwdev->rf->cqp; + struct irdma_cqp_request *cqp_request; + struct cqp_cmds_info *cqp_info; + + cqp_request = irdma_get_cqp_request(iwcqp, wait); + if (!cqp_request) + return IRDMA_ERR_NO_MEMORY; + + cqp_info = &cqp_request->info; + info = &cqp_info->in.u.manage_qhash_table_entry.info; + memset(info, 0, sizeof(*info)); + info->vsi = &iwdev->vsi; + info->manage = mtype; + info->entry_type = etype; + if (cminfo->vlan_id < VLAN_N_VID) { + info->vlan_valid = true; + info->vlan_id = cminfo->vlan_id; + } else { + info->vlan_valid = false; + } + info->ipv4_valid = cminfo->ipv4; + info->user_pri = cminfo->user_pri; + ether_addr_copy(info->mac_addr, iwdev->netdev->dev_addr); + info->qp_num = cminfo->qh_qpid; + info->dest_port = cminfo->loc_port; + info->dest_ip[0] = cminfo->loc_addr[0]; + info->dest_ip[1] = cminfo->loc_addr[1]; + info->dest_ip[2] = cminfo->loc_addr[2]; + info->dest_ip[3] = cminfo->loc_addr[3]; + if (etype == IRDMA_QHASH_TYPE_TCP_ESTABLISHED || + etype == IRDMA_QHASH_TYPE_UDP_UNICAST || + etype == IRDMA_QHASH_TYPE_UDP_MCAST || + etype == IRDMA_QHASH_TYPE_ROCE_MCAST || + etype == IRDMA_QHASH_TYPE_ROCEV2_HW) { + info->src_port = cminfo->rem_port; + info->src_ip[0] = cminfo->rem_addr[0]; + info->src_ip[1] = cminfo->rem_addr[1]; + info->src_ip[2] = cminfo->rem_addr[2]; + info->src_ip[3] = cminfo->rem_addr[3]; + } + if (cmnode) { + cqp_request->callback_fcn = irdma_send_syn_cqp_callback; + cqp_request->param = cmnode; + } + if (info->ipv4_valid) + dev_dbg(rfdev_to_dev(dev), + "CM: %s IP=%pI4, port=%d, mac=%pM, vlan_id=%d\n", + !mtype ? "DELETE" : "ADD", info->dest_ip, + info->dest_port, info->mac_addr, cminfo->vlan_id); + else + dev_dbg(rfdev_to_dev(dev), + "CM: %s IP=%pI6, port=%d, mac=%pM, vlan_id=%d\n", + !mtype ? 
"DELETE" : "ADD", info->dest_ip, + info->dest_port, info->mac_addr, cminfo->vlan_id); + cqp_info->in.u.manage_qhash_table_entry.cqp = &iwdev->rf->cqp.sc_cqp; + cqp_info->in.u.manage_qhash_table_entry.scratch = (uintptr_t)cqp_request; + cqp_info->cqp_cmd = IRDMA_OP_MANAGE_QHASH_TABLE_ENTRY; + cqp_info->post_sq = 1; + status = irdma_handle_cqp_op(iwdev->rf, cqp_request); + if (status) + dev_dbg(rfdev_to_dev(dev), + "ERR: CQP-OP Manage Qhash Entry fail"); + + return status; +} + +/** + * irdma_post_qp_fatal - Post QP_FATAL event associated with given QP + * @qp: QP associated with QP_FATL event + */ +static inline void irdma_post_qp_fatal(struct irdma_qp *qp) +{ + struct ib_event ibevent; + + if (qp->ibqp.event_handler) { + ibevent.device = qp->ibqp.device; + ibevent.event = IB_EVENT_QP_FATAL; + ibevent.element.qp = &qp->ibqp; + qp->ibqp.event_handler(&ibevent, qp->ibqp.qp_context); + } +} + +/** + * irdma_hw_flush_wqes_callback - Check return code after flush + * @cqp_request: qhash cqp completion + */ +static void irdma_hw_flush_wqes_callback(struct irdma_cqp_request *cqp_request) +{ + struct irdma_qp_flush_info *hw_info; + struct irdma_sc_qp *qp; + struct irdma_qp *iwqp; + struct cqp_cmds_info *cqp_info; + + cqp_info = &cqp_request->info; + hw_info = &cqp_request->info.in.u.qp_flush_wqes.info; + qp = cqp_info->in.u.qp_flush_wqes.qp; + iwqp = qp->qp_uk.back_qp; + + if (cqp_request->compl_info.maj_err_code) + return; + if (hw_info->rq && + (cqp_request->compl_info.min_err_code == IRDMA_CQP_COMPL_SQ_WQE_FLUSHED || + cqp_request->compl_info.min_err_code == 0)) { + /* RQ WQE flush was requested but did not happen */ + qp->qp_uk.rq_flush_complete = true; + complete(&iwqp->rq_drained); + } + if (hw_info->sq && + (cqp_request->compl_info.min_err_code == IRDMA_CQP_COMPL_RQ_WQE_FLUSHED || + cqp_request->compl_info.min_err_code == 0)) { + qp->qp_uk.sq_flush_complete = true; + complete(&iwqp->sq_drained); + } +} + +/** + * irdma_hw_flush_wqes - flush qp's wqe + * @rf: RDMA PCI function + * @qp: hardware control qp + * @info: info for flush + * @wait: flag wait for completion + */ +enum irdma_status_code irdma_hw_flush_wqes(struct irdma_pci_f *rf, + struct irdma_sc_qp *qp, + struct irdma_qp_flush_info *info, + bool wait) +{ + enum irdma_status_code status; + struct irdma_qp_flush_info *hw_info; + struct irdma_cqp_request *cqp_request; + struct cqp_cmds_info *cqp_info; + struct irdma_qp *iwqp = qp->qp_uk.back_qp; + unsigned long flags = 0; + + cqp_request = irdma_get_cqp_request(&rf->cqp, wait); + if (!cqp_request) + return IRDMA_ERR_NO_MEMORY; + + cqp_info = &cqp_request->info; + if (!wait) + cqp_request->callback_fcn = irdma_hw_flush_wqes_callback; + hw_info = &cqp_request->info.in.u.qp_flush_wqes.info; + memcpy(hw_info, info, sizeof(*hw_info)); + cqp_info->cqp_cmd = IRDMA_OP_QP_FLUSH_WQES; + cqp_info->post_sq = 1; + cqp_info->in.u.qp_flush_wqes.qp = qp; + cqp_info->in.u.qp_flush_wqes.scratch = (uintptr_t)cqp_request; + status = irdma_handle_cqp_op(rf, cqp_request); + if (status) { + dev_dbg(rfdev_to_dev(&rf->sc_dev), + "ERR: CQP-OP Flush WQE's fail"); + complete(&iwqp->sq_drained); + complete(&iwqp->rq_drained); + qp->qp_uk.sq_flush_complete = true; + qp->qp_uk.rq_flush_complete = true; + return status; + } + + if (!wait || cqp_request->compl_info.maj_err_code) + return 0; + + if (info->rq) { + if (cqp_request->compl_info.min_err_code == IRDMA_CQP_COMPL_SQ_WQE_FLUSHED || + cqp_request->compl_info.min_err_code == 0) { + /* RQ WQE flush was requested but did not happen */ + qp->qp_uk.rq_flush_complete 
= true; + complete(&iwqp->rq_drained); + } + } + if (info->sq) { + if (cqp_request->compl_info.min_err_code == IRDMA_CQP_COMPL_RQ_WQE_FLUSHED || + cqp_request->compl_info.min_err_code == 0) { + spin_lock_irqsave(&iwqp->lock, flags); + /* + * Handling case where WQE is posted to empty SQ when + * flush has not completed + */ + if (IRDMA_RING_MORE_WORK(qp->qp_uk.sq_ring)) { + struct irdma_cqp_request *new_req; + + if (!qp->qp_uk.sq_flush_complete) { + spin_unlock_irqrestore(&iwqp->lock, flags); + return 0; + } + qp->qp_uk.sq_flush_complete = false; + qp->flush_sq = false; + spin_unlock_irqrestore(&iwqp->lock, flags); + + info->rq = false; + info->sq = true; + new_req = irdma_get_cqp_request(&rf->cqp, true); + if (!new_req) + return IRDMA_ERR_NO_MEMORY; + cqp_info = &new_req->info; + hw_info = &new_req->info.in.u.qp_flush_wqes.info; + memcpy(hw_info, info, sizeof(*hw_info)); + cqp_info->cqp_cmd = IRDMA_OP_QP_FLUSH_WQES; + cqp_info->post_sq = 1; + cqp_info->in.u.qp_flush_wqes.qp = qp; + cqp_info->in.u.qp_flush_wqes.scratch = (uintptr_t)new_req; + + status = irdma_handle_cqp_op(rf, new_req); + if (new_req->compl_info.maj_err_code || + new_req->compl_info.min_err_code != IRDMA_CQP_COMPL_SQ_WQE_FLUSHED || + status) { + pr_err("SQ in error but not flushed"); + qp->qp_uk.sq_flush_complete = false; + irdma_post_qp_fatal(iwqp); + } + } else { + /* SQ WQE flush was requested but did not happen */ + qp->qp_uk.sq_flush_complete = true; + spin_unlock_irqrestore(&iwqp->lock, flags); + complete(&iwqp->sq_drained); + } + } else { + spin_lock_irqsave(&iwqp->lock, flags); + if (!IRDMA_RING_MORE_WORK(qp->qp_uk.sq_ring)) { + qp->qp_uk.sq_flush_complete = true; + spin_unlock_irqrestore(&iwqp->lock, flags); + complete(&iwqp->sq_drained); + } else { + spin_unlock_irqrestore(&iwqp->lock, flags); + } + } + } + + return 0; +} + +/** + * irdma_gen_ae - generate AE + * @rf: RDMA PCI function + * @qp: qp associated with AE + * @info: info for ae + * @wait: wait for completion + */ +void irdma_gen_ae(struct irdma_pci_f *rf, struct irdma_sc_qp *qp, + struct irdma_gen_ae_info *info, bool wait) +{ + struct irdma_gen_ae_info *ae_info; + struct irdma_cqp_request *cqp_request; + struct cqp_cmds_info *cqp_info; + + cqp_request = irdma_get_cqp_request(&rf->cqp, wait); + if (!cqp_request) + return; + + cqp_info = &cqp_request->info; + ae_info = &cqp_request->info.in.u.gen_ae.info; + memcpy(ae_info, info, sizeof(*ae_info)); + cqp_info->cqp_cmd = IRDMA_OP_GEN_AE; + cqp_info->post_sq = 1; + cqp_info->in.u.gen_ae.qp = qp; + cqp_info->in.u.gen_ae.scratch = (uintptr_t)cqp_request; + if (irdma_handle_cqp_op(rf, cqp_request)) + dev_dbg(rfdev_to_dev(&rf->sc_dev), + "ERR: CQP OP failed attempting to generate ae_code=0x%x\n", + info->ae_code); +} + +/** + * irdma_get_ib_wc - return change flush code to IB's + * @opcode: iwarp flush code + */ +static enum ib_wc_status irdma_get_ib_wc(enum irdma_flush_opcode opcode) +{ + switch (opcode) { + case FLUSH_PROT_ERR: + return IB_WC_LOC_PROT_ERR; + case FLUSH_REM_ACCESS_ERR: + return IB_WC_REM_ACCESS_ERR; + case FLUSH_LOC_QP_OP_ERR: + return IB_WC_LOC_QP_OP_ERR; + case FLUSH_REM_OP_ERR: + return IB_WC_REM_OP_ERR; + case FLUSH_LOC_LEN_ERR: + return IB_WC_LOC_LEN_ERR; + case FLUSH_GENERAL_ERR: + return IB_WC_GENERAL_ERR; + case FLUSH_FATAL_ERR: + default: + return IB_WC_FATAL_ERR; + } +} + +void irdma_flush_wqes(struct irdma_qp *iwqp, u32 flush_mask) +{ + struct irdma_qp_flush_info info = {}; + struct irdma_pci_f *rf = iwqp->iwdev->rf; + u8 opcode = iwqp->sc_qp.flush_code; + + if (!(flush_mask & 
IRDMA_FLUSH_SQ) && !(flush_mask & IRDMA_FLUSH_RQ)) + return; + + /* Set flush info fields*/ + info.sq = flush_mask & IRDMA_FLUSH_SQ; + info.rq = flush_mask & IRDMA_FLUSH_RQ; + + if (flush_mask & IRDMA_REFLUSH) { + if (info.sq) + iwqp->sc_qp.flush_sq = false; + if (info.rq) + iwqp->sc_qp.flush_rq = false; + } + + /* Generate userflush errors in CQE */ + if (opcode) { + if (info.sq) { + info.sq_minor_code = (u16)irdma_get_ib_wc(opcode); + info.sq_major_code = IRDMA_FLUSH_MAJOR_ERR; + } + if (info.rq) { + info.rq_minor_code = (u16)irdma_get_ib_wc(opcode); + info.rq_major_code = IRDMA_FLUSH_MAJOR_ERR; + } + info.userflushcode = true; + } + + if (irdma_upload_context && !(flush_mask & IRDMA_REFLUSH) && + irdma_upload_qp_context(iwqp, 0, 1)) + dev_warn(rfdev_to_dev(&rf->sc_dev), + "failed to upload QP context\n"); + + /* Issue flush */ + (void)irdma_hw_flush_wqes(rf, &iwqp->sc_qp, &info, + flush_mask & IRDMA_FLUSH_WAIT); + iwqp->flush_issued = true; +} diff --git a/drivers/infiniband/hw/irdma/i40iw_hw.c b/drivers/infiniband/hw/irdma/i40iw_hw.c new file mode 100644 index 000000000000..8abee8aaf6f5 --- /dev/null +++ b/drivers/infiniband/hw/irdma/i40iw_hw.c @@ -0,0 +1,211 @@ +// SPDX-License-Identifier: GPL-2.0 or Linux-OpenIB +/* Copyright (c) 2015 - 2019 Intel Corporation */ +#include "osdep.h" +#include "type.h" +#include "i40iw_hw.h" +#include "status.h" +#include "protos.h" + +#define I40E_CQPSQ_CQ_CQID_SHIFT 0 +#define I40E_CQPSQ_CQ_CQID_MASK \ + (0xffffULL << I40E_CQPSQ_CQ_CQID_SHIFT) + +static u32 i40iw_regs[IRDMA_MAX_REGS] = { + I40E_PFPE_CQPTAIL, + I40E_PFPE_CQPDB, + I40E_PFPE_CCQPSTATUS, + I40E_PFPE_CCQPHIGH, + I40E_PFPE_CCQPLOW, + I40E_PFPE_CQARM, + I40E_PFPE_CQACK, + I40E_PFPE_AEQALLOC, + I40E_PFPE_CQPERRCODES, + I40E_PFPE_WQEALLOC, + I40E_PFINT_DYN_CTLN(0), + I40IW_DB_ADDR_OFFSET, + + I40E_GLPCI_LBARCTRL, + I40E_GLPE_CPUSTATUS0, + I40E_GLPE_CPUSTATUS1, + I40E_GLPE_CPUSTATUS2, + I40E_PFINT_AEQCTL, + I40E_PFINT_CEQCTL(0), + I40E_VSIQF_CTL(0), + I40E_PFHMC_PDINV, + I40E_GLHMC_VFPDINV(0) +}; + +static u32 i40iw_stat_offsets_32[IRDMA_HW_STAT_INDEX_MAX_32] = { + I40E_GLPES_PFIP4RXDISCARD(0), + I40E_GLPES_PFIP4RXTRUNC(0), + I40E_GLPES_PFIP4TXNOROUTE(0), + I40E_GLPES_PFIP6RXDISCARD(0), + I40E_GLPES_PFIP6RXTRUNC(0), + I40E_GLPES_PFIP6TXNOROUTE(0), + I40E_GLPES_PFTCPRTXSEG(0), + I40E_GLPES_PFTCPRXOPTERR(0), + I40E_GLPES_PFTCPRXPROTOERR(0), + I40E_GLPES_PFRXVLANERR(0) +}; + +static u32 i40iw_stat_offsets_64[IRDMA_HW_STAT_INDEX_MAX_64] = { + I40E_GLPES_PFIP4RXOCTSLO(0), + I40E_GLPES_PFIP4RXPKTSLO(0), + I40E_GLPES_PFIP4RXFRAGSLO(0), + I40E_GLPES_PFIP4RXMCPKTSLO(0), + I40E_GLPES_PFIP4TXOCTSLO(0), + I40E_GLPES_PFIP4TXPKTSLO(0), + I40E_GLPES_PFIP4TXFRAGSLO(0), + I40E_GLPES_PFIP4TXMCPKTSLO(0), + I40E_GLPES_PFIP6RXOCTSLO(0), + I40E_GLPES_PFIP6RXPKTSLO(0), + I40E_GLPES_PFIP6RXFRAGSLO(0), + I40E_GLPES_PFIP6RXMCPKTSLO(0), + I40E_GLPES_PFIP6TXOCTSLO(0), + I40E_GLPES_PFIP6TXPKTSLO(0), + I40E_GLPES_PFIP6TXFRAGSLO(0), + I40E_GLPES_PFIP6TXMCPKTSLO(0), + I40E_GLPES_PFTCPRXSEGSLO(0), + I40E_GLPES_PFTCPTXSEGLO(0), + I40E_GLPES_PFRDMARXRDSLO(0), + I40E_GLPES_PFRDMARXSNDSLO(0), + I40E_GLPES_PFRDMARXWRSLO(0), + I40E_GLPES_PFRDMATXRDSLO(0), + I40E_GLPES_PFRDMATXSNDSLO(0), + I40E_GLPES_PFRDMATXWRSLO(0), + I40E_GLPES_PFRDMAVBNDLO(0), + I40E_GLPES_PFRDMAVINVLO(0), + I40E_GLPES_PFIP4RXMCOCTSLO(0), + I40E_GLPES_PFIP4TXMCOCTSLO(0), + I40E_GLPES_PFIP6RXMCOCTSLO(0), + I40E_GLPES_PFIP6TXMCOCTSLO(0), + I40E_GLPES_PFUDPRXPKTSLO(0), + I40E_GLPES_PFUDPTXPKTSLO(0) +}; + +static u64 i40iw_masks[IRDMA_MAX_MASKS] = { + 
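/* per-HW values for the shared mask set (CCQP done/err, STAG PDID, CQ CEQID/CQID); i40iw_init_hw() copies these into dev->hw_masks[] */ +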
I40E_PFPE_CCQPSTATUS_CCQP_DONE_MASK, + I40E_PFPE_CCQPSTATUS_CCQP_ERR_MASK, + I40E_CQPSQ_STAG_PDID_MASK, + I40E_CQPSQ_CQ_CEQID_MASK, + I40E_CQPSQ_CQ_CQID_MASK, +}; + +static u64 i40iw_shifts[IRDMA_MAX_SHIFTS] = { + I40E_PFPE_CCQPSTATUS_CCQP_DONE_SHIFT, + I40E_PFPE_CCQPSTATUS_CCQP_ERR_SHIFT, + I40E_CQPSQ_STAG_PDID_SHIFT, + I40E_CQPSQ_CQ_CEQID_SHIFT, + I40E_CQPSQ_CQ_CQID_SHIFT, +}; + +static struct irdma_irq_ops i40iw_irq_ops; + +/** + * i40iw_config_ceq- Configure CEQ interrupt + * @dev: pointer to the device structure + * @ceq_id: Completion Event Queue ID + * @idx: vector index + */ +static void i40iw_config_ceq(struct irdma_sc_dev *dev, u32 ceq_id, u32 idx) +{ + u32 reg_val; + + reg_val = (ceq_id << I40E_PFINT_LNKLSTN_FIRSTQ_INDX_SHIFT); + reg_val |= (QUEUE_TYPE_CEQ << I40E_PFINT_LNKLSTN_FIRSTQ_TYPE_SHIFT); + wr32(dev->hw, I40E_PFINT_LNKLSTN(idx - 1), reg_val); + + reg_val = (0x3 << I40E_PFINT_DYN_CTLN_ITR_INDX_SHIFT); + reg_val |= I40E_PFINT_DYN_CTLN_INTENA_MASK; + wr32(dev->hw, I40E_PFINT_DYN_CTLN(idx - 1), reg_val); + + reg_val = (IRDMA_GLINT_CEQCTL_CAUSE_ENA_M | + (idx << IRDMA_GLINT_CEQCTL_MSIX_INDX_S) | + IRDMA_GLINT_CEQCTL_ITR_INDX_M); + reg_val |= (NULL_QUEUE_INDEX << I40E_PFINT_CEQCTL_NEXTQ_INDX_SHIFT); + + wr32(dev->hw, i40iw_regs[IRDMA_GLINT_CEQCTL] + 4 * ceq_id, reg_val); +} + +/** + * i40iw_ena_irq - Enable interrupt + * @dev: pointer to the device structure + * @idx: vector index + */ +static void i40iw_ena_irq(struct irdma_sc_dev *dev, u32 idx) +{ + u32 val; + + val = IRDMA_GLINT_DYN_CTL_INTENA_M | IRDMA_GLINT_DYN_CTL_CLEARPBA_M | + IRDMA_GLINT_DYN_CTL_ITR_INDX_M; + wr32(dev->hw, i40iw_regs[IRDMA_GLINT_DYN_CTL] + 4 * (idx - 1), val); +} + +/** + * irdma_disable_irq - Disable interrupt + * @dev: pointer to the device structure + * @idx: vector index + */ +static void i40iw_disable_irq(struct irdma_sc_dev *dev, u32 idx) +{ + wr32(dev->hw, i40iw_regs[IRDMA_GLINT_DYN_CTL] + 4 * (idx - 1), 0); +} + +void i40iw_init_hw(struct irdma_sc_dev *dev) +{ + int i; + u8 __iomem *hw_addr; + + for (i = 0; i < IRDMA_MAX_REGS; ++i) { + hw_addr = dev->hw->hw_addr; + + if (i == IRDMA_DB_ADDR_OFFSET) + hw_addr = NULL; + + dev->hw_regs[i] = (u32 __iomem *)(i40iw_regs[i] + hw_addr); + } + + for (i = 0; i < IRDMA_HW_STAT_INDEX_MAX_32; ++i) + dev->hw_stats_regs_32[i] = i40iw_stat_offsets_32[i]; + + for (i = 0; i < IRDMA_HW_STAT_INDEX_MAX_64; ++i) + dev->hw_stats_regs_64[i] = i40iw_stat_offsets_64[i]; + + for (i = 0; i < IRDMA_MAX_SHIFTS; ++i) + dev->hw_shifts[i] = i40iw_shifts[i]; + + for (i = 0; i < IRDMA_MAX_MASKS; ++i) + dev->hw_masks[i] = i40iw_masks[i]; + + dev->wqe_alloc_db = dev->hw_regs[IRDMA_WQEALLOC]; + dev->cq_arm_db = dev->hw_regs[IRDMA_CQARM]; + dev->aeq_alloc_db = dev->hw_regs[IRDMA_AEQALLOC]; + dev->cqp_db = dev->hw_regs[IRDMA_CQPDB]; + dev->cq_ack_db = dev->hw_regs[IRDMA_CQACK]; + dev->ceq_itr_mask_db = NULL; + dev->aeq_itr_mask_db = NULL; + + memcpy(&i40iw_irq_ops, dev->irq_ops, sizeof(i40iw_irq_ops)); + i40iw_irq_ops.irdma_en_irq = i40iw_ena_irq; + i40iw_irq_ops.irdma_dis_irq = i40iw_disable_irq; + i40iw_irq_ops.irdma_cfg_ceq = i40iw_config_ceq; + dev->irq_ops = &i40iw_irq_ops; + + /* Setup the hardware limits, hmc may limit further */ + dev->hw_attrs.uk_attrs.max_hw_wq_frags = I40IW_MAX_WQ_FRAGMENT_COUNT; + dev->hw_attrs.uk_attrs.max_hw_read_sges = I40IW_MAX_SGE_RD; + dev->hw_attrs.max_hw_device_pages = I40IW_MAX_PUSH_PAGE_COUNT; + dev->hw_attrs.first_hw_vf_fpm_id = I40IW_FIRST_VF_FPM_ID; + dev->hw_attrs.uk_attrs.max_hw_inline = I40IW_MAX_INLINE_DATA_SIZE; + 
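/* IRD/ORD cap the number of inbound/outbound RDMA Reads a QP may have outstanding on this HW generation */ +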
dev->hw_attrs.max_hw_ird = I40IW_MAX_IRD_SIZE; + dev->hw_attrs.max_hw_ord = I40IW_MAX_ORD_SIZE; + dev->hw_attrs.max_hw_wqes = I40IW_MAX_WQ_ENTRIES; + dev->hw_attrs.uk_attrs.max_hw_rq_quanta = I40IW_QP_SW_MAX_RQ_QUANTA; + dev->hw_attrs.uk_attrs.max_hw_wq_quanta = I40IW_QP_SW_MAX_WQ_QUANTA; + dev->hw_attrs.uk_attrs.max_hw_sq_chunk = I40IW_MAX_QUANTA_PER_WR; + dev->hw_attrs.max_hw_pds = I40IW_MAX_PDS; + dev->hw_attrs.max_stat_inst = I40IW_MAX_STATS_COUNT; + dev->hw_attrs.max_hw_outbound_msg_size = I40IW_MAX_OUTBOUND_MSG_SIZE; + dev->hw_attrs.max_hw_inbound_msg_size = I40IW_MAX_INBOUND_MSG_SIZE; + dev->hw_attrs.max_qp_wr = I40IW_MAX_QP_WRS; +} diff --git a/drivers/infiniband/hw/irdma/i40iw_hw.h b/drivers/infiniband/hw/irdma/i40iw_hw.h new file mode 100644 index 000000000000..058b25211d4a --- /dev/null +++ b/drivers/infiniband/hw/irdma/i40iw_hw.h @@ -0,0 +1,162 @@ +/* SPDX-License-Identifier: GPL-2.0 or Linux-OpenIB */ +/* Copyright (c) 2015 - 2019 Intel Corporation */ +#ifndef I40IW_HW_H +#define I40IW_HW_H +#define I40E_VFPE_CQPTAIL1 0x0000A000 /* Reset: VFR */ +#define I40E_VFPE_CQPDB1 0x0000BC00 /* Reset: VFR */ +#define I40E_VFPE_CCQPSTATUS1 0x0000B800 /* Reset: VFR */ +#define I40E_VFPE_CCQPHIGH1 0x00009800 /* Reset: VFR */ +#define I40E_VFPE_CCQPLOW1 0x0000AC00 /* Reset: VFR */ +#define I40E_VFPE_CQARM1 0x0000B400 /* Reset: VFR */ +#define I40E_VFPE_CQACK1 0x0000B000 /* Reset: VFR */ +#define I40E_VFPE_AEQALLOC1 0x0000A400 /* Reset: VFR */ +#define I40E_VFPE_CQPERRCODES1 0x00009C00 /* Reset: VFR */ +#define I40E_VFPE_WQEALLOC1 0x0000C000 /* Reset: VFR */ +#define I40E_VFINT_DYN_CTLN(_INTVF) (0x00024800 + ((_INTVF) * 4)) /* _i=0...511 */ /* Reset: VFR */ + +#define I40E_PFPE_CQPTAIL 0x00008080 /* Reset: PFR */ + +#define I40E_PFPE_CQPDB 0x00008000 /* Reset: PFR */ +#define I40E_PFPE_CCQPSTATUS 0x00008100 /* Reset: PFR */ +#define I40E_PFPE_CCQPHIGH 0x00008200 /* Reset: PFR */ +#define I40E_PFPE_CCQPLOW 0x00008180 /* Reset: PFR */ +#define I40E_PFPE_CQARM 0x00131080 /* Reset: PFR */ +#define I40E_PFPE_CQACK 0x00131100 /* Reset: PFR */ +#define I40E_PFPE_AEQALLOC 0x00131180 /* Reset: PFR */ +#define I40E_PFPE_CQPERRCODES 0x00008880 /* Reset: PFR */ +#define I40E_PFPE_WQEALLOC 0x00138C00 /* Reset: PFR */ +#define I40E_GLPCI_LBARCTRL 0x000BE484 /* Reset: POR */ +#define I40E_GLPE_CPUSTATUS0 0x0000D040 /* Reset: PE_CORER */ +#define I40E_GLPE_CPUSTATUS1 0x0000D044 /* Reset: PE_CORER */ +#define I40E_GLPE_CPUSTATUS2 0x0000D048 /* Reset: PE_CORER */ +#define I40E_PFHMC_PDINV 0x000C0300 /* Reset: PFR */ +#define I40E_GLHMC_VFPDINV(_i) (0x000C8300 + ((_i) * 4)) /* _i=0...31 */ /* Reset: CORER */ +#define I40E_PFINT_DYN_CTLN(_INTPF) (0x00034800 + ((_INTPF) * 4)) /* _i=0...511 */ /* Reset: PFR */ +#define I40E_PFINT_AEQCTL 0x00038700 /* Reset: CORER */ + +#define I40E_GLPES_PFIP4RXDISCARD(_i) (0x00010600 + ((_i) * 4)) /* _i=0...15 */ /* Reset: PE_CORER */ +#define I40E_GLPES_PFIP4RXTRUNC(_i) (0x00010700 + ((_i) * 4)) /* _i=0...15 */ /* Reset: PE_CORER */ +#define I40E_GLPES_PFIP4TXNOROUTE(_i) (0x00012E00 + ((_i) * 4)) /* _i=0...15 */ /* Reset: PE_CORER */ +#define I40E_GLPES_PFIP6RXDISCARD(_i) (0x00011200 + ((_i) * 4)) /* _i=0...15 */ /* Reset: PE_CORER */ +#define I40E_GLPES_PFIP6RXTRUNC(_i) (0x00011300 + ((_i) * 4)) /* _i=0...15 */ /* Reset: PE_CORER */ + +#define I40E_GLPES_PFRDMAVBNDLO(_i) (0x00014800 + ((_i) * 8)) /* _i=0...15 */ /* Reset: PE_CORER */ +#define I40E_GLPES_PFIP4TXMCOCTSLO(_i) (0x00012000 + ((_i) * 8)) /* _i=0...15 */ /* Reset: PE_CORER */ +#define I40E_GLPES_PFIP6RXMCOCTSLO(_i) 
(0x00011600 + ((_i) * 8)) /* _i=0...15 */ /* Reset: PE_CORER */ +#define I40E_GLPES_PFIP6TXMCOCTSLO(_i) (0x00012A00 + ((_i) * 8)) /* _i=0...15 */ /* Reset: PE_CORER */ +#define I40E_GLPES_PFUDPRXPKTSLO(_i) (0x00013800 + ((_i) * 8)) /* _i=0...15 */ /* Reset: PE_CORER */ +#define I40E_GLPES_PFUDPTXPKTSLO(_i) (0x00013A00 + ((_i) * 8)) /* _i=0...15 */ /* Reset: PE_CORER */ + +#define I40E_GLPES_PFIP6TXNOROUTE(_i) (0x00012F00 + ((_i) * 4)) /* _i=0...15 */ /* Reset: PE_CORER */ +#define I40E_GLPES_PFTCPRTXSEG(_i) (0x00013600 + ((_i) * 4)) /* _i=0...15 */ /* Reset: PE_CORER */ +#define I40E_GLPES_PFTCPRXOPTERR(_i) (0x00013200 + ((_i) * 4)) /* _i=0...15 */ /* Reset: PE_CORER */ +#define I40E_GLPES_PFTCPRXPROTOERR(_i) (0x00013300 + ((_i) * 4)) /* _i=0...15 */ /* Reset: PE_CORER */ +#define I40E_GLPES_PFRXVLANERR(_i) (0x00010000 + ((_i) * 4)) /* _i=0...15 */ /* Reset: PE_CORER */ +#define I40E_GLPES_PFIP4RXOCTSLO(_i) (0x00010200 + ((_i) * 8)) /* _i=0...15 */ /* Reset: PE_CORER */ +#define I40E_GLPES_PFIP4RXPKTSLO(_i) (0x00010400 + ((_i) * 8)) /* _i=0...15 */ /* Reset: PE_CORER */ +#define I40E_GLPES_PFIP4RXFRAGSLO(_i) (0x00010800 + ((_i) * 8)) /* _i=0...15 */ /* Reset: PE_CORER */ +#define I40E_GLPES_PFIP4RXMCPKTSLO(_i) (0x00010C00 + ((_i) * 8)) /* _i=0...15 */ /* Reset: PE_CORER */ +#define I40E_GLPES_PFIP4TXOCTSLO(_i) (0x00011A00 + ((_i) * 8)) /* _i=0...15 */ /* Reset: PE_CORER */ +#define I40E_GLPES_PFIP4TXPKTSLO(_i) (0x00011C00 + ((_i) * 8)) /* _i=0...15 */ /* Reset: PE_CORER */ +#define I40E_GLPES_PFIP4TXFRAGSLO(_i) (0x00011E00 + ((_i) * 8)) /* _i=0...15 */ /* Reset: PE_CORER */ +#define I40E_GLPES_PFIP4TXMCPKTSLO(_i) (0x00012200 + ((_i) * 8)) /* _i=0...15 */ /* Reset: PE_CORER */ +#define I40E_GLPES_PFIP6RXOCTSLO(_i) (0x00010E00 + ((_i) * 8)) /* _i=0...15 */ /* Reset: PE_CORER */ +#define I40E_GLPES_PFIP6RXPKTSLO(_i) (0x00011000 + ((_i) * 8)) /* _i=0...15 */ /* Reset: PE_CORER */ +#define I40E_GLPES_PFIP6RXFRAGSLO(_i) (0x00011400 + ((_i) * 8)) /* _i=0...15 */ /* Reset: PE_CORER */ +#define I40E_GLPES_PFIP6TXOCTSLO(_i) (0x00012400 + ((_i) * 8)) /* _i=0...15 */ /* Reset: PE_CORER */ +#define I40E_GLPES_PFIP6TXPKTSLO(_i) (0x00012600 + ((_i) * 8)) /* _i=0...15 */ /* Reset: PE_CORER */ +#define I40E_GLPES_PFIP6TXFRAGSLO(_i) (0x00012800 + ((_i) * 8)) /* _i=0...15 */ /* Reset: PE_CORER */ +#define I40E_GLPES_PFIP6TXMCPKTSLO(_i) (0x00012C00 + ((_i) * 8)) /* _i=0...15 */ /* Reset: PE_CORER */ +#define I40E_GLPES_PFTCPTXSEGLO(_i) (0x00013400 + ((_i) * 8)) /* _i=0...15 */ /* Reset: PE_CORER */ +#define I40E_GLPES_PFRDMARXRDSLO(_i) (0x00013E00 + ((_i) * 8)) /* _i=0...15 */ /* Reset: PE_CORER */ +#define I40E_GLPES_PFRDMARXSNDSLO(_i) (0x00014000 + ((_i) * 8)) /* _i=0...15 */ /* Reset: PE_CORER */ +#define I40E_GLPES_PFRDMARXWRSLO(_i) (0x00013C00 + ((_i) * 8)) /* _i=0...15 */ /* Reset: PE_CORER */ +#define I40E_GLPES_PFRDMATXRDSLO(_i) (0x00014400 + ((_i) * 8)) /* _i=0...15 */ /* Reset: PE_CORER */ +#define I40E_GLPES_PFRDMATXSNDSLO(_i) (0x00014600 + ((_i) * 8)) /* _i=0...15 */ /* Reset: PE_CORER */ +#define I40E_GLPES_PFRDMATXWRSLO(_i) (0x00014200 + ((_i) * 8)) /* _i=0...15 */ /* Reset: PE_CORER */ +#define I40E_GLPES_PFIP4RXMCOCTSLO(_i) (0x00010A00 + ((_i) * 8)) /* _i=0...15 */ /* Reset: PE_CORER */ +#define I40E_GLPES_PFIP6RXMCPKTSLO(_i) (0x00011800 + ((_i) * 8)) /* _i=0...15 */ /* Reset: PE_CORER */ +#define I40E_GLPES_PFTCPRXSEGSLO(_i) (0x00013000 + ((_i) * 8)) /* _i=0...15 */ /* Reset: PE_CORER */ +#define I40E_GLPES_PFRDMAVINVLO(_i) (0x00014A00 + ((_i) * 8)) /* _i=0...15 */ /* Reset: PE_CORER */ + 
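+/* I40IW_DB_ADDR_OFFSET is a raw doorbell offset rather than a register address; i40iw_init_hw() stores it against a NULL base (see the IRDMA_DB_ADDR_OFFSET special case there) */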
+#define I40IW_DB_ADDR_OFFSET (4 * 1024 * 1024 - 64 * 1024) + +#define I40IW_VF_DB_ADDR_OFFSET (64 * 1024) + +#define I40E_PFINT_LNKLSTN(_INTPF) (0x00035000 + ((_INTPF) * 4)) /* _i=0...511 */ /* Reset: PFR */ +#define I40E_PFINT_LNKLSTN_MAX_INDEX 511 +#define I40E_PFINT_LNKLSTN_FIRSTQ_INDX_SHIFT 0 +#define I40E_PFINT_LNKLSTN_FIRSTQ_INDX_MASK (0x7FF << I40E_PFINT_LNKLSTN_FIRSTQ_INDX_SHIFT) +#define I40E_PFINT_LNKLSTN_FIRSTQ_TYPE_SHIFT 11 +#define I40E_PFINT_LNKLSTN_FIRSTQ_TYPE_MASK (0x3 << I40E_PFINT_LNKLSTN_FIRSTQ_TYPE_SHIFT) + +#define I40E_PFINT_CEQCTL(_INTPF) (0x00036800 + ((_INTPF) * 4)) /* _i=0...511 */ /* Reset: CORER */ +#define I40E_PFINT_CEQCTL_MAX_INDEX 511 +#define I40E_PFINT_CEQCTL_MSIX_INDX_SHIFT 0 +#define I40E_PFINT_CEQCTL_MSIX_INDX_MASK (0xFF << I40E_PFINT_CEQCTL_MSIX_INDX_SHIFT) +#define I40E_PFINT_CEQCTL_ITR_INDX_SHIFT 11 +#define I40E_PFINT_CEQCTL_ITR_INDX_MASK (0x3 << I40E_PFINT_CEQCTL_ITR_INDX_SHIFT) +#define I40E_PFINT_CEQCTL_MSIX0_INDX_SHIFT 13 +#define I40E_PFINT_CEQCTL_MSIX0_INDX_MASK (0x7 << I40E_PFINT_CEQCTL_MSIX0_INDX_SHIFT) +#define I40E_PFINT_CEQCTL_NEXTQ_INDX_SHIFT 16 +#define I40E_PFINT_CEQCTL_NEXTQ_INDX_MASK (0x7FF << I40E_PFINT_CEQCTL_NEXTQ_INDX_SHIFT) +#define I40E_PFINT_CEQCTL_NEXTQ_TYPE_SHIFT 27 +#define I40E_PFINT_CEQCTL_NEXTQ_TYPE_MASK (0x3 << I40E_PFINT_CEQCTL_NEXTQ_TYPE_SHIFT) +#define I40E_PFINT_CEQCTL_CAUSE_ENA_SHIFT 30 +#define I40E_PFINT_CEQCTL_CAUSE_ENA_MASK (0x1 << I40E_PFINT_CEQCTL_CAUSE_ENA_SHIFT) +#define I40E_PFINT_CEQCTL_INTEVENT_SHIFT 31 +#define I40E_PFINT_CEQCTL_INTEVENT_MASK (0x1 << I40E_PFINT_CEQCTL_INTEVENT_SHIFT) + +#define I40E_CQPSQ_STAG_PDID_SHIFT 48 +#define I40E_CQPSQ_STAG_PDID_MASK (0x7FFFULL << I40E_CQPSQ_STAG_PDID_SHIFT) + +#define I40E_PFPE_CCQPSTATUS_CCQP_DONE_SHIFT 0 +#define I40E_PFPE_CCQPSTATUS_CCQP_DONE_MASK (0x1ULL << I40E_PFPE_CCQPSTATUS_CCQP_DONE_SHIFT) + +#define I40E_PFPE_CCQPSTATUS_CCQP_ERR_SHIFT 31 +#define I40E_PFPE_CCQPSTATUS_CCQP_ERR_MASK (0x1ULL << I40E_PFPE_CCQPSTATUS_CCQP_ERR_SHIFT) + +#define I40E_PFINT_DYN_CTLN_ITR_INDX_SHIFT 3 +#define I40E_PFINT_DYN_CTLN_ITR_INDX_MASK (0x3 << I40E_PFINT_DYN_CTLN_ITR_INDX_SHIFT) + +#define I40E_PFINT_DYN_CTLN_INTENA_SHIFT 0 +#define I40E_PFINT_DYN_CTLN_INTENA_MASK (0x1 << I40E_PFINT_DYN_CTLN_INTENA_SHIFT) + +#define I40E_CQPSQ_CQ_CEQID_SHIFT 24 +#define I40E_CQPSQ_CQ_CEQID_MASK (0x7fUL << I40E_CQPSQ_CQ_CEQID_SHIFT) + +#define I40E_VSIQF_CTL(_VSI) (0x0020D800 + ((_VSI) * 4)) + +enum i40iw_device_caps_const { + I40IW_MAX_WQ_FRAGMENT_COUNT = 3, + I40IW_MAX_SGE_RD = 1, + I40IW_MAX_PUSH_PAGE_COUNT = 0, + I40IW_MAX_INLINE_DATA_SIZE = 48, + I40IW_MAX_IRD_SIZE = 63, + I40IW_MAX_ORD_SIZE = 127, + I40IW_MAX_WQ_ENTRIES = 2048, + I40IW_MAX_WQE_SIZE_RQ = 128, + I40IW_MAX_PDS = 32768, + I40IW_MAX_STATS_COUNT = 16, + I40IW_MAX_CQ_SIZE = 1048575, + I40IW_MAX_OUTBOUND_MSG_SIZE = 2147483647, + I40IW_MAX_INBOUND_MSG_SIZE = 2147483647, +}; + +#define I40IW_QP_WQE_MIN_SIZE 32 +#define I40IW_QP_WQE_MAX_SIZE 128 +#define I40IW_QP_SW_MIN_WQSIZE 4 + +#define I40IW_MAX_RQ_WQE_SHIFT 2 +#define I40IW_MAX_QUANTA_PER_WR 2 + +#define I40IW_QP_SW_MAX_SQ_QUANTA 2048 +#define I40IW_QP_SW_MAX_RQ_QUANTA 16384 +#define I40IW_QP_SW_MAX_WQ_QUANTA 2048 +#define I40IW_MAX_QP_WRS ((I40IW_QP_SW_MAX_SQ_QUANTA - IRDMA_SQ_RSVD) / I40IW_MAX_QUANTA_PER_WR) +#define I40IW_FIRST_VF_FPM_ID 16 +#define QUEUE_TYPE_CEQ 2 +#define NULL_QUEUE_INDEX 0x7FF + +void i40iw_init_hw(struct irdma_sc_dev *dev); +#endif /* I40IW_HW_H */ diff --git a/drivers/infiniband/hw/irdma/icrdma_hw.c b/drivers/infiniband/hw/irdma/icrdma_hw.c new 
file mode 100644 index 000000000000..90ceb9c29235 --- /dev/null +++ b/drivers/infiniband/hw/irdma/icrdma_hw.c @@ -0,0 +1,76 @@ +// SPDX-License-Identifier: GPL-2.0 or Linux-OpenIB +/* Copyright (c) 2019 Intel Corporation */ +#include "osdep.h" +#include "type.h" +#include "icrdma_hw.h" + +static u32 icrdma_regs[IRDMA_MAX_REGS] = { + PFPE_CQPTAIL, + PFPE_CQPDB, + PFPE_CCQPSTATUS, + PFPE_CCQPHIGH, + PFPE_CCQPLOW, + PFPE_CQARM, + PFPE_CQACK, + PFPE_AEQALLOC, + PFPE_CQPERRCODES, + PFPE_WQEALLOC, + GLINT_DYN_CTL(0), + ICRDMA_DB_ADDR_OFFSET, + + GLPCI_LBARCTRL, + GLPE_CPUSTATUS0, + GLPE_CPUSTATUS1, + GLPE_CPUSTATUS2, + PFINT_AEQCTL, + GLINT_CEQCTL(0), + VSIQF_PE_CTL1(0), + PFHMC_PDINV, + GLHMC_VFPDINV(0) +}; + +static u64 icrdma_masks[IRDMA_MAX_MASKS] = { + ICRDMA_CCQPSTATUS_CCQP_DONE_M, + ICRDMA_CCQPSTATUS_CCQP_ERR_M, + ICRDMA_CQPSQ_STAG_PDID_M, + ICRDMA_CQPSQ_CQ_CEQID_M, + ICRDMA_CQPSQ_CQ_CQID_M, +}; + +static u64 icrdma_shifts[IRDMA_MAX_SHIFTS] = { + ICRDMA_CCQPSTATUS_CCQP_DONE_S, + ICRDMA_CCQPSTATUS_CCQP_ERR_S, + ICRDMA_CQPSQ_STAG_PDID_S, + ICRDMA_CQPSQ_CQ_CEQID_S, + ICRDMA_CQPSQ_CQ_CQID_S, +}; + +void icrdma_init_hw(struct irdma_sc_dev *dev) +{ + int i; + u8 __iomem *hw_addr; + + for (i = 0; i < IRDMA_MAX_REGS; ++i) { + hw_addr = dev->hw->hw_addr; + + if (i == IRDMA_DB_ADDR_OFFSET) + hw_addr = NULL; + + dev->hw_regs[i] = (u32 __iomem *)(hw_addr + icrdma_regs[i]); + } + + for (i = 0; i < IRDMA_MAX_SHIFTS; ++i) + dev->hw_shifts[i] = icrdma_shifts[i]; + + for (i = 0; i < IRDMA_MAX_MASKS; ++i) + dev->hw_masks[i] = icrdma_masks[i]; + + dev->wqe_alloc_db = dev->hw_regs[IRDMA_WQEALLOC]; + dev->cq_arm_db = dev->hw_regs[IRDMA_CQARM]; + dev->aeq_alloc_db = dev->hw_regs[IRDMA_AEQALLOC]; + dev->cqp_db = dev->hw_regs[IRDMA_CQPDB]; + dev->cq_ack_db = dev->hw_regs[IRDMA_CQACK]; + dev->hw_attrs.max_stat_inst = ICRDMA_MAX_STATS_COUNT; + + dev->hw_attrs.uk_attrs.max_hw_sq_chunk = IRDMA_MAX_QUANTA_PER_WR; +} diff --git a/drivers/infiniband/hw/irdma/icrdma_hw.h b/drivers/infiniband/hw/irdma/icrdma_hw.h new file mode 100644 index 000000000000..7eb7cbdcfb73 --- /dev/null +++ b/drivers/infiniband/hw/irdma/icrdma_hw.h @@ -0,0 +1,62 @@ +/* SPDX-License-Identifier: GPL-2.0 or Linux-OpenIB */ +/* Copyright (c) 2019 Intel Corporation */ +#ifndef ICRDMA_HW_H +#define ICRDMA_HW_H + +#define VFPE_CQPTAIL1 0x0000a000 +#define VFPE_CQPDB1 0x0000bc00 +#define VFPE_CCQPSTATUS1 0x0000b800 +#define VFPE_CCQPHIGH1 0x00009800 +#define VFPE_CCQPLOW1 0x0000ac00 +#define VFPE_CQARM1 0x0000b400 +#define VFPE_CQARM1 0x0000b400 +#define VFPE_CQACK1 0x0000b000 +#define VFPE_AEQALLOC1 0x0000a400 +#define VFPE_CQPERRCODES1 0x00009c00 +#define VFPE_WQEALLOC1 0x0000c000 +#define VFINT_DYN_CTLN(_i) (0x00003800 + ((_i) * 4)) /* _i=0...63 */ + +#define PFPE_CQPTAIL 0x00500880 +#define PFPE_CQPDB 0x00500800 +#define PFPE_CCQPSTATUS 0x0050a000 +#define PFPE_CCQPHIGH 0x0050a100 +#define PFPE_CCQPLOW 0x0050a080 +#define PFPE_CQARM 0x00502c00 +#define PFPE_CQACK 0x00502c80 +#define PFPE_AEQALLOC 0x00502d00 +#define GLINT_DYN_CTL(_INT) (0x00160000 + ((_INT) * 4)) /* _i=0...2047 */ +#define GLPCI_LBARCTRL 0x0009de74 +#define GLPE_CPUSTATUS0 0x0050ba5c +#define GLPE_CPUSTATUS1 0x0050ba60 +#define GLPE_CPUSTATUS2 0x0050ba64 +#define PFINT_AEQCTL 0x0016cb00 +#define PFPE_CQPERRCODES 0x0050a200 +#define PFPE_WQEALLOC 0x00504400 +#define GLINT_CEQCTL(_INT) (0x0015c000 + ((_INT) * 4)) /* _i=0...2047 */ +#define VSIQF_PE_CTL1(_VSI) (0x00414000 + ((_VSI) * 4)) /* _i=0...767 */ +#define PFHMC_PDINV 0x00520300 +#define GLHMC_VFPDINV(_i) (0x00528300 + ((_i) * 
4)) /* _i=0...31 */ + +#define ICRDMA_DB_ADDR_OFFSET (8 * 1024 * 1024 - 64 * 1024) + +#define ICRDMA_VF_DB_ADDR_OFFSET (64 * 1024) + +/* CCQSTATUS */ +#define ICRDMA_CCQPSTATUS_CCQP_DONE_S 0 +#define ICRDMA_CCQPSTATUS_CCQP_DONE_M (0x1ULL << ICRDMA_CCQPSTATUS_CCQP_DONE_S) +#define ICRDMA_CCQPSTATUS_CCQP_ERR_S 31 +#define ICRDMA_CCQPSTATUS_CCQP_ERR_M (0x1ULL << ICRDMA_CCQPSTATUS_CCQP_ERR_S) +#define ICRDMA_CQPSQ_STAG_PDID_S 46 +#define ICRDMA_CQPSQ_STAG_PDID_M (0x3ffffULL << ICRDMA_CQPSQ_STAG_PDID_S) +#define ICRDMA_CQPSQ_CQ_CEQID_S 22 +#define ICRDMA_CQPSQ_CQ_CEQID_M (0x3ffULL << ICRDMA_CQPSQ_CQ_CEQID_S) +#define ICRDMA_CQPSQ_CQ_CQID_S 0 +#define ICRDMA_CQPSQ_CQ_CQID_M \ + (0x7ffffULL << ICRDMA_CQPSQ_CQ_CQID_S) + +enum icrdma_device_caps_const { + ICRDMA_MAX_STATS_COUNT = 128, +}; + +void icrdma_init_hw(struct irdma_sc_dev *dev); +#endif /* ICRDMA_HW_H*/ From patchwork Fri Apr 17 17:12:38 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeff Kirsher X-Patchwork-Id: 221086 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.7 required=3.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id F0666C38A2B for ; Fri, 17 Apr 2020 17:19:07 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id B8032208E4 for ; Fri, 17 Apr 2020 17:19:07 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729953AbgDQRTH (ORCPT ); Fri, 17 Apr 2020 13:19:07 -0400 Received: from mga12.intel.com ([192.55.52.136]:38151 "EHLO mga12.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728740AbgDQRTD (ORCPT ); Fri, 17 Apr 2020 13:19:03 -0400 IronPort-SDR: BBLCvMjcxUik9p7YFdk8ya4O30PpjsOQf03vThpvWzQnpl/PcxwdOaFxBQLOcGsMi+TOtAwMOd DUZ4yOPlPynQ== X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by fmsmga106.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 17 Apr 2020 10:12:54 -0700 IronPort-SDR: WXDDpZsOjxf+eeMcV96ZbHWB97rw22A6ET/rkoxChcdoEzSsegJ6z/deFjNcsovaf/pCJcwEjw AUGOu2gpVQAQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.72,395,1580803200"; d="scan'208";a="364383705" Received: from jtkirshe-desk1.jf.intel.com ([134.134.177.86]) by fmsmga001.fm.intel.com with ESMTP; 17 Apr 2020 10:12:53 -0700 From: Jeff Kirsher To: gregkh@linuxfoundation.org, jgg@ziepe.ca Cc: Mustafa Ismail , netdev@vger.kernel.org, linux-rdma@vger.kernel.org, nhorman@redhat.com, sassmann@redhat.com, Shiraz Saleem Subject: [RFC PATCH v5 03/16] RDMA/irdma: Implement HW Admin Queue OPs Date: Fri, 17 Apr 2020 10:12:38 -0700 Message-Id: <20200417171251.1533371-4-jeffrey.t.kirsher@intel.com> X-Mailer: git-send-email 2.25.2 In-Reply-To: <20200417171251.1533371-1-jeffrey.t.kirsher@intel.com> References: <20200417171251.1533371-1-jeffrey.t.kirsher@intel.com> MIME-Version: 1.0 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Mustafa Ismail The driver posts privileged commands to the HW Admin Queue (Control QP or CQP) to request administrative actions from the HW. 
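A typical caller allocates an irdma_cqp_request, fills in the op-specific fields, and hands it to irdma_handle_cqp_op(), which posts the WQE and, for waiting requests, sleeps until irdma_cqp_ce_handler() marks it done. A minimal sketch of that calling pattern, modeled on the local-MAC-entry helpers in hw.c (rf is the driver's struct irdma_pci_f; error handling and refcounting trimmed):

        struct irdma_cqp_request *cqp_request;
        struct cqp_cmds_info *cqp_info;
        enum irdma_status_code status;

        /* second argument selects a waiting (sleeping) request */
        cqp_request = irdma_get_cqp_request(&rf->cqp, true);
        if (!cqp_request)
                return IRDMA_ERR_NO_MEMORY;

        cqp_info = &cqp_request->info;
        cqp_info->cqp_cmd = IRDMA_OP_ALLOC_LOCAL_MAC_ENTRY;   /* any CQP opcode */
        cqp_info->post_sq = 1;
        cqp_info->in.u.alloc_local_mac_entry.cqp = &rf->cqp.sc_cqp;
        cqp_info->in.u.alloc_local_mac_entry.scratch = (uintptr_t)cqp_request;

        /* posts the CQP SQ WQE and waits for the CCQ completion */
        status = irdma_handle_cqp_op(rf, cqp_request);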
Implement create/destroy of CQP and the supporting functions, data structures and headers to handle the different CQP commands Signed-off-by: Mustafa Ismail Signed-off-by: Shiraz Saleem --- drivers/infiniband/hw/irdma/ctrl.c | 5985 +++++++++++++++++++++++++++ drivers/infiniband/hw/irdma/defs.h | 2132 ++++++++++ drivers/infiniband/hw/irdma/irdma.h | 190 + drivers/infiniband/hw/irdma/type.h | 1714 ++++++++ 4 files changed, 10021 insertions(+) create mode 100644 drivers/infiniband/hw/irdma/ctrl.c create mode 100644 drivers/infiniband/hw/irdma/defs.h create mode 100644 drivers/infiniband/hw/irdma/irdma.h create mode 100644 drivers/infiniband/hw/irdma/type.h diff --git a/drivers/infiniband/hw/irdma/ctrl.c b/drivers/infiniband/hw/irdma/ctrl.c new file mode 100644 index 000000000000..46db672548c1 --- /dev/null +++ b/drivers/infiniband/hw/irdma/ctrl.c @@ -0,0 +1,5985 @@ +// SPDX-License-Identifier: GPL-2.0 or Linux-OpenIB +/* Copyright (c) 2015 - 2019 Intel Corporation */ +#include "osdep.h" +#include "status.h" +#include "hmc.h" +#include "defs.h" +#include "type.h" +#include "ws.h" +#include "protos.h" + +/** + * irdma_get_qp_from_list - get next qp from a list + * @head: Listhead of qp's + * @qp: current qp + */ +struct irdma_sc_qp *irdma_get_qp_from_list(struct list_head *head, + struct irdma_sc_qp *qp) +{ + struct list_head *lastentry; + struct list_head *entry = NULL; + + if (list_empty(head)) + return NULL; + + if (!qp) { + entry = head->next; + } else { + lastentry = &qp->list; + entry = lastentry->next; + if (entry == head) + return NULL; + } + + return container_of(entry, struct irdma_sc_qp, list); +} + +/** + * irdma_sc_qp_suspend_resume - suspend/resume all qp's on VSI + * @vsi: the VSI struct pointer + * @op: Set to IRDMA_OP_RESUME or IRDMA_OP_SUSPEND + */ +void irdma_sc_suspend_resume_qps(struct irdma_sc_vsi *vsi, u8 op) +{ + struct irdma_sc_qp *qp = NULL; + unsigned long flags; + int i; + + for (i = 0; i < IRDMA_MAX_USER_PRIORITY; i++) { + spin_lock_irqsave(&vsi->qos[i].lock, flags); + qp = irdma_get_qp_from_list(&vsi->qos[i].qplist, qp); + while (qp) { + if (op == IRDMA_OP_RESUME) { + if (!irdma_ws_add(vsi, i)) { + qp->qs_handle = + vsi->qos[qp->user_pri].qs_handle; + irdma_cqp_qp_suspend_resume(qp, op); + } else { + irdma_cqp_qp_suspend_resume(qp, op); + irdma_modify_qp_to_err(qp); + } + } else if (op == IRDMA_OP_SUSPEND) { + /* issue cqp suspend command */ + if (!irdma_cqp_qp_suspend_resume(qp, op)) + atomic_inc(&vsi->qp_suspend_reqs); + } + qp = irdma_get_qp_from_list(&vsi->qos[i].qplist, qp); + } + spin_unlock_irqrestore(&vsi->qos[i].lock, flags); + } +} + +/** + * irdma_change_l2params - given the new l2 parameters, change all qp + * @vsi: RDMA VSI pointer + * @l2params: New parameters from l2 + */ +void irdma_change_l2params(struct irdma_sc_vsi *vsi, + struct irdma_l2params *l2params) +{ + if (l2params->mtu_changed) { + vsi->mtu = l2params->mtu; + irdma_reinitialize_ieq(vsi); + } + + if (!l2params->tc_changed) + return; + + vsi->tc_change_pending = false; + irdma_sc_suspend_resume_qps(vsi, IRDMA_OP_RESUME); +} + +/** + * irdma_qp_rem_qos - remove qp from qos lists during destroy qp + * @qp: qp to be removed from qos + */ +void irdma_qp_rem_qos(struct irdma_sc_qp *qp) +{ + struct irdma_sc_vsi *vsi = qp->vsi; + unsigned long flags; + + if (!qp->on_qoslist) + return; + + spin_lock_irqsave(&vsi->qos[qp->user_pri].lock, flags); + qp->on_qoslist = false; + list_del(&qp->list); + spin_unlock_irqrestore(&vsi->qos[qp->user_pri].lock, flags); + dev_dbg(rfdev_to_dev(qp->dev), + "DCB: 
DCB: Remove qp[%d] UP[%d] qset[%d]\n", qp->qp_uk.qp_id, + qp->user_pri, qp->qs_handle); +} + +/** + * irdma_qp_add_qos - called during setctx for qp to be added to qos + * @qp: qp to be added to qos + */ +void irdma_qp_add_qos(struct irdma_sc_qp *qp) +{ + struct irdma_sc_vsi *vsi = qp->vsi; + unsigned long flags; + + if (qp->on_qoslist) + return; + + spin_lock_irqsave(&vsi->qos[qp->user_pri].lock, flags); + list_add(&qp->list, &vsi->qos[qp->user_pri].qplist); + qp->on_qoslist = true; + qp->qs_handle = vsi->qos[qp->user_pri].qs_handle; + spin_unlock_irqrestore(&vsi->qos[qp->user_pri].lock, flags); + dev_dbg(rfdev_to_dev(qp->dev), + "DCB: DCB: Add qp[%d] UP[%d] qset[%d]\n", qp->qp_uk.qp_id, + qp->user_pri, qp->qs_handle); +} + +/** + * irdma_sc_pd_init - initialize sc pd struct + * @dev: sc device struct + * @pd: sc pd ptr + * @pd_id: pd_id for allocated pd + * @abi_ver: ABI version from user context, -1 if not valid + */ +static void irdma_sc_pd_init(struct irdma_sc_dev *dev, struct irdma_sc_pd *pd, + u32 pd_id, int abi_ver) +{ + pd->pd_id = pd_id; + pd->abi_ver = abi_ver; + pd->dev = dev; +} + +/** + * irdma_sc_add_arp_cache_entry - cqp wqe add arp cache entry + * @cqp: struct for cqp hw + * @info: arp entry information + * @scratch: u64 saved to be used during cqp completion + * @post_sq: flag for cqp db to ring + */ +static enum irdma_status_code +irdma_sc_add_arp_cache_entry(struct irdma_sc_cqp *cqp, + struct irdma_add_arp_cache_entry_info *info, + u64 scratch, bool post_sq) +{ + __le64 *wqe; + u64 temp, hdr; + + wqe = irdma_sc_cqp_get_next_send_wqe(cqp, scratch); + if (!wqe) + return IRDMA_ERR_RING_FULL; + set_64bit_val(wqe, 8, info->reach_max); + + temp = info->mac_addr[5] | LS_64_1(info->mac_addr[4], 8) | + LS_64_1(info->mac_addr[3], 16) | LS_64_1(info->mac_addr[2], 24) | + LS_64_1(info->mac_addr[1], 32) | LS_64_1(info->mac_addr[0], 40); + set_64bit_val(wqe, 16, temp); + + hdr = info->arp_index | + LS_64(IRDMA_CQP_OP_MANAGE_ARP, IRDMA_CQPSQ_OPCODE) | + LS_64((info->permanent ? 
1 : 0), IRDMA_CQPSQ_MAT_PERMANENT) | + LS_64(1, IRDMA_CQPSQ_MAT_ENTRYVALID) | + LS_64(cqp->polarity, IRDMA_CQPSQ_WQEVALID); + dma_wmb(); /* make sure WQE is written before valid bit is set */ + + set_64bit_val(wqe, 24, hdr); + + irdma_debug_buf(cqp->dev, IRDMA_DEBUG_WQE, "ARP_CACHE_ENTRY WQE", wqe, + IRDMA_CQP_WQE_SIZE * 8); + if (post_sq) + irdma_sc_cqp_post_sq(cqp); + + return 0; +} + +/** + * irdma_sc_del_arp_cache_entry - dele arp cache entry + * @cqp: struct for cqp hw + * @scratch: u64 saved to be used during cqp completion + * @arp_index: arp index to delete arp entry + * @post_sq: flag for cqp db to ring + */ +static enum irdma_status_code +irdma_sc_del_arp_cache_entry(struct irdma_sc_cqp *cqp, u64 scratch, + u16 arp_index, bool post_sq) +{ + __le64 *wqe; + u64 hdr; + + wqe = irdma_sc_cqp_get_next_send_wqe(cqp, scratch); + if (!wqe) + return IRDMA_ERR_RING_FULL; + + hdr = arp_index | LS_64(IRDMA_CQP_OP_MANAGE_ARP, IRDMA_CQPSQ_OPCODE) | + LS_64(cqp->polarity, IRDMA_CQPSQ_WQEVALID); + dma_wmb(); /* make sure WQE is written before valid bit is set */ + + set_64bit_val(wqe, 24, hdr); + + irdma_debug_buf(cqp->dev, IRDMA_DEBUG_WQE, "ARP_CACHE_DEL_ENTRY WQE", + wqe, IRDMA_CQP_WQE_SIZE * 8); + if (post_sq) + irdma_sc_cqp_post_sq(cqp); + + return 0; +} + +/** + * irdma_sc_query_arp_cache_entry - cqp wqe to query arp and arp index + * @cqp: struct for cqp hw + * @scratch: u64 saved to be used during cqp completion + * @arp_index: arp index to delete arp entry + * @post_sq: flag for cqp db to ring + */ +static enum irdma_status_code +irdma_sc_query_arp_cache_entry(struct irdma_sc_cqp *cqp, u64 scratch, + u16 arp_index, bool post_sq) +{ + __le64 *wqe; + u64 hdr; + + wqe = irdma_sc_cqp_get_next_send_wqe(cqp, scratch); + if (!wqe) + return IRDMA_ERR_RING_FULL; + + hdr = arp_index | LS_64(IRDMA_CQP_OP_MANAGE_ARP, IRDMA_CQPSQ_OPCODE) | + LS_64(1, IRDMA_CQPSQ_MAT_QUERY) | + LS_64(cqp->polarity, IRDMA_CQPSQ_WQEVALID); + dma_wmb(); /* make sure WQE is written before valid bit is set */ + + set_64bit_val(wqe, 24, hdr); + + irdma_debug_buf(cqp->dev, IRDMA_DEBUG_WQE, "QUERY_ARP_CACHE_ENTRY WQE", + wqe, IRDMA_CQP_WQE_SIZE * 8); + if (post_sq) + irdma_sc_cqp_post_sq(cqp); + + return 0; +} + +/** + * irdma_sc_manage_apbvt_entry - for adding and deleting apbvt entries + * @cqp: struct for cqp hw + * @info: info for apbvt entry to add or delete + * @scratch: u64 saved to be used during cqp completion + * @post_sq: flag for cqp db to ring + */ +static enum irdma_status_code +irdma_sc_manage_apbvt_entry(struct irdma_sc_cqp *cqp, + struct irdma_apbvt_info *info, u64 scratch, + bool post_sq) +{ + __le64 *wqe; + u64 hdr; + + wqe = irdma_sc_cqp_get_next_send_wqe(cqp, scratch); + if (!wqe) + return IRDMA_ERR_RING_FULL; + + set_64bit_val(wqe, 16, info->port); + + hdr = LS_64(IRDMA_CQP_OP_MANAGE_APBVT, IRDMA_CQPSQ_OPCODE) | + LS_64(info->add, IRDMA_CQPSQ_MAPT_ADDPORT) | + LS_64(cqp->polarity, IRDMA_CQPSQ_WQEVALID); + dma_wmb(); /* make sure WQE is written before valid bit is set */ + + set_64bit_val(wqe, 24, hdr); + + irdma_debug_buf(cqp->dev, IRDMA_DEBUG_WQE, "MANAGE_APBVT WQE", wqe, + IRDMA_CQP_WQE_SIZE * 8); + if (post_sq) + irdma_sc_cqp_post_sq(cqp); + + return 0; +} + +/** + * irdma_sc_manage_qhash_table_entry - manage quad hash entries + * @cqp: struct for cqp hw + * @info: info for quad hash to manage + * @scratch: u64 saved to be used during cqp completion + * @post_sq: flag for cqp db to ring + * + * This is called before connection establishment is started. 
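+ * irdma_manage_qhash() in hw.c builds the irdma_qhash_table_info from + * the irdma_cm_info for the connection before issuing the + * IRDMA_OP_MANAGE_QHASH_TABLE_ENTRY CQP op.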
+ * For passive connections, when listener is created, it will + * call with entry type of IRDMA_QHASH_TYPE_TCP_SYN with local + * ip address and tcp port. When SYN is received (passive + * connections) or sent (active connections), this routine is + * called with entry type of IRDMA_QHASH_TYPE_TCP_ESTABLISHED + * and quad is passed in info. + * + * When iwarp connection is done and its state moves to RTS, the + * quad hash entry in the hardware will point to iwarp's qp + * number and requires no calls from the driver. + */ +static enum irdma_status_code +irdma_sc_manage_qhash_table_entry(struct irdma_sc_cqp *cqp, + struct irdma_qhash_table_info *info, + u64 scratch, bool post_sq) +{ + __le64 *wqe; + u64 qw1 = 0; + u64 qw2 = 0; + u64 temp; + struct irdma_sc_vsi *vsi = info->vsi; + + wqe = irdma_sc_cqp_get_next_send_wqe(cqp, scratch); + if (!wqe) + return IRDMA_ERR_RING_FULL; + temp = info->mac_addr[5] | LS_64_1(info->mac_addr[4], 8) | + LS_64_1(info->mac_addr[3], 16) | LS_64_1(info->mac_addr[2], 24) | + LS_64_1(info->mac_addr[1], 32) | LS_64_1(info->mac_addr[0], 40); + set_64bit_val(wqe, 0, temp); + + qw1 = LS_64(info->qp_num, IRDMA_CQPSQ_QHASH_QPN) | + LS_64(info->dest_port, IRDMA_CQPSQ_QHASH_DEST_PORT); + if (info->ipv4_valid) { + set_64bit_val(wqe, 48, + LS_64(info->dest_ip[0], IRDMA_CQPSQ_QHASH_ADDR3)); + } else { + set_64bit_val(wqe, 56, + LS_64(info->dest_ip[0], IRDMA_CQPSQ_QHASH_ADDR0) | + LS_64(info->dest_ip[1], IRDMA_CQPSQ_QHASH_ADDR1)); + + set_64bit_val(wqe, 48, + LS_64(info->dest_ip[2], IRDMA_CQPSQ_QHASH_ADDR2) | + LS_64(info->dest_ip[3], IRDMA_CQPSQ_QHASH_ADDR3)); + } + qw2 = LS_64(vsi->qos[info->user_pri].qs_handle, + IRDMA_CQPSQ_QHASH_QS_HANDLE); + if (info->vlan_valid) + qw2 |= LS_64(info->vlan_id, IRDMA_CQPSQ_QHASH_VLANID); + set_64bit_val(wqe, 16, qw2); + if (info->entry_type == IRDMA_QHASH_TYPE_TCP_ESTABLISHED) { + qw1 |= LS_64(info->src_port, IRDMA_CQPSQ_QHASH_SRC_PORT); + if (!info->ipv4_valid) { + set_64bit_val(wqe, 40, + LS_64(info->src_ip[0], IRDMA_CQPSQ_QHASH_ADDR0) | + LS_64(info->src_ip[1], IRDMA_CQPSQ_QHASH_ADDR1)); + set_64bit_val(wqe, 32, + LS_64(info->src_ip[2], IRDMA_CQPSQ_QHASH_ADDR2) | + LS_64(info->src_ip[3], IRDMA_CQPSQ_QHASH_ADDR3)); + } else { + set_64bit_val(wqe, 32, + LS_64(info->src_ip[0], IRDMA_CQPSQ_QHASH_ADDR3)); + } + } + + set_64bit_val(wqe, 8, qw1); + temp = LS_64(cqp->polarity, IRDMA_CQPSQ_QHASH_WQEVALID) | + LS_64(IRDMA_CQP_OP_MANAGE_QUAD_HASH_TABLE_ENTRY, + IRDMA_CQPSQ_QHASH_OPCODE) | + LS_64(info->manage, IRDMA_CQPSQ_QHASH_MANAGE) | + LS_64(info->ipv4_valid, IRDMA_CQPSQ_QHASH_IPV4VALID) | + LS_64(info->vlan_valid, IRDMA_CQPSQ_QHASH_VLANVALID) | + LS_64(info->entry_type, IRDMA_CQPSQ_QHASH_ENTRYTYPE); + dma_wmb(); /* make sure WQE is written before valid bit is set */ + + set_64bit_val(wqe, 24, temp); + + irdma_debug_buf(cqp->dev, IRDMA_DEBUG_WQE, "MANAGE_QHASH WQE", wqe, + IRDMA_CQP_WQE_SIZE * 8); + if (post_sq) + irdma_sc_cqp_post_sq(cqp); + + return 0; +} + +/** + * irdma_sc_cqp_nop - send a nop wqe + * @cqp: struct for cqp hw + * @scratch: u64 saved to be used during cqp completion + * @post_sq: flag for cqp db to ring + */ +static enum irdma_status_code irdma_sc_cqp_nop(struct irdma_sc_cqp *cqp, + u64 scratch, bool post_sq) +{ + __le64 *wqe; + u64 hdr; + + wqe = irdma_sc_cqp_get_next_send_wqe(cqp, scratch); + if (!wqe) + return IRDMA_ERR_RING_FULL; + hdr = LS_64(IRDMA_CQP_OP_NOP, IRDMA_CQPSQ_OPCODE) | + LS_64(cqp->polarity, IRDMA_CQPSQ_WQEVALID); + dma_wmb(); /* make sure WQE is written before valid bit is set */ + + set_64bit_val(wqe, 
24, hdr); + + irdma_debug_buf(cqp->dev, IRDMA_DEBUG_WQE, "NOP WQE", wqe, + IRDMA_CQP_WQE_SIZE * 8); + if (post_sq) + irdma_sc_cqp_post_sq(cqp); + + return 0; +} + +/** + * irdma_sc_qp_init - initialize qp + * @qp: sc qp + * @info: initialization qp info + */ +static enum irdma_status_code irdma_sc_qp_init(struct irdma_sc_qp *qp, + struct irdma_qp_init_info *info) +{ + enum irdma_status_code ret_code; + u32 pble_obj_cnt; + u16 wqe_size; + + if (info->qp_uk_init_info.max_sq_frag_cnt > + info->pd->dev->hw_attrs.uk_attrs.max_hw_wq_frags || + info->qp_uk_init_info.max_rq_frag_cnt > + info->pd->dev->hw_attrs.uk_attrs.max_hw_wq_frags) + return IRDMA_ERR_INVALID_FRAG_COUNT; + + qp->dev = info->pd->dev; + qp->vsi = info->vsi; + qp->ieq_qp = info->vsi->exception_lan_q; + qp->sq_pa = info->sq_pa; + qp->rq_pa = info->rq_pa; + qp->hw_host_ctx_pa = info->host_ctx_pa; + qp->q2_pa = info->q2_pa; + qp->shadow_area_pa = info->shadow_area_pa; + qp->q2_buf = info->q2; + qp->pd = info->pd; + qp->hw_host_ctx = info->host_ctx; + info->qp_uk_init_info.wqe_alloc_db = qp->pd->dev->wqe_alloc_db; + ret_code = irdma_qp_uk_init(&qp->qp_uk, &info->qp_uk_init_info); + if (ret_code) + return ret_code; + + qp->virtual_map = info->virtual_map; + pble_obj_cnt = info->pd->dev->hmc_info->hmc_obj[IRDMA_HMC_IW_PBLE].cnt; + + if ((info->virtual_map && info->sq_pa >= pble_obj_cnt) || + (info->virtual_map && info->rq_pa >= pble_obj_cnt)) + return IRDMA_ERR_INVALID_PBLE_INDEX; + + qp->llp_stream_handle = (void *)(-1); + qp->qp_type = info->type ? info->type : IRDMA_QP_TYPE_IWARP; + qp->hw_sq_size = irdma_get_encoded_wqe_size(qp->qp_uk.sq_ring.size, false); + dev_dbg(rfdev_to_dev(qp->dev), + "WQE: hw_sq_size[%04d] sq_ring.size[%04d]\n", qp->hw_sq_size, + qp->qp_uk.sq_ring.size); + if (qp->qp_uk.uk_attrs->hw_rev == IRDMA_GEN_1 && qp->pd->abi_ver > 4) + wqe_size = IRDMA_WQE_SIZE_128; + else + ret_code = irdma_fragcnt_to_wqesize_rq(qp->qp_uk.max_rq_frag_cnt, + &wqe_size); + if (ret_code) + return ret_code; + + qp->hw_rq_size = irdma_get_encoded_wqe_size(qp->qp_uk.rq_size * + (wqe_size / IRDMA_QP_WQE_MIN_SIZE), false); + dev_dbg(rfdev_to_dev(qp->dev), + "WQE: hw_rq_size[%04d] qp_uk.rq_size[%04d] wqe_size[%04d]\n", + qp->hw_rq_size, qp->qp_uk.rq_size, wqe_size); + qp->sq_tph_val = info->sq_tph_val; + qp->rq_tph_val = info->rq_tph_val; + qp->sq_tph_en = info->sq_tph_en; + qp->rq_tph_en = info->rq_tph_en; + qp->rcv_tph_en = info->rcv_tph_en; + qp->xmit_tph_en = info->xmit_tph_en; + qp->qp_uk.first_sq_wq = info->qp_uk_init_info.first_sq_wq; + qp->qs_handle = qp->vsi->qos[qp->user_pri].qs_handle; + + return 0; +} + +/** + * irdma_sc_qp_create - create qp + * @qp: sc qp + * @info: qp create info + * @scratch: u64 saved to be used during cqp completion + * @post_sq: flag for cqp db to ring + */ +static enum irdma_status_code +irdma_sc_qp_create(struct irdma_sc_qp *qp, struct irdma_create_qp_info *info, + u64 scratch, bool post_sq) +{ + struct irdma_sc_cqp *cqp; + __le64 *wqe; + u64 hdr; + + cqp = qp->dev->cqp; + if (qp->qp_uk.qp_id < cqp->dev->hw_attrs.min_hw_qp_id || + qp->qp_uk.qp_id > (cqp->dev->hmc_info->hmc_obj[IRDMA_HMC_IW_QP].max_cnt - 1)) + return IRDMA_ERR_INVALID_QP_ID; + + wqe = irdma_sc_cqp_get_next_send_wqe(cqp, scratch); + if (!wqe) + return IRDMA_ERR_RING_FULL; + + set_64bit_val(wqe, 16, qp->hw_host_ctx_pa); + set_64bit_val(wqe, 40, qp->shadow_area_pa); + + hdr = qp->qp_uk.qp_id | + LS_64(IRDMA_CQP_OP_CREATE_QP, IRDMA_CQPSQ_OPCODE) | + LS_64((info->ord_valid ? 
1 : 0), IRDMA_CQPSQ_QP_ORDVALID) | + LS_64(info->tcp_ctx_valid, IRDMA_CQPSQ_QP_TOECTXVALID) | + LS_64(info->mac_valid, IRDMA_CQPSQ_QP_MACVALID) | + LS_64(qp->qp_type, IRDMA_CQPSQ_QP_QPTYPE) | + LS_64(qp->virtual_map, IRDMA_CQPSQ_QP_VQ) | + LS_64(info->force_lpb, IRDMA_CQPSQ_QP_FORCELOOPBACK) | + LS_64(info->cq_num_valid, IRDMA_CQPSQ_QP_CQNUMVALID) | + LS_64(info->arp_cache_idx_valid, IRDMA_CQPSQ_QP_ARPTABIDXVALID) | + LS_64(info->next_iwarp_state, IRDMA_CQPSQ_QP_NEXTIWSTATE) | + LS_64(cqp->polarity, IRDMA_CQPSQ_WQEVALID); + dma_wmb(); /* make sure WQE is written before valid bit is set */ + + set_64bit_val(wqe, 24, hdr); + + irdma_debug_buf(cqp->dev, IRDMA_DEBUG_WQE, "QP_CREATE WQE", wqe, + IRDMA_CQP_WQE_SIZE * 8); + if (post_sq) + irdma_sc_cqp_post_sq(cqp); + + return 0; +} + +/** + * irdma_sc_qp_modify - modify qp cqp wqe + * @qp: sc qp + * @info: modify qp info + * @scratch: u64 saved to be used during cqp completion + * @post_sq: flag for cqp db to ring + */ +static enum irdma_status_code +irdma_sc_qp_modify(struct irdma_sc_qp *qp, struct irdma_modify_qp_info *info, + u64 scratch, bool post_sq) +{ + __le64 *wqe; + struct irdma_sc_cqp *cqp; + u64 hdr; + u8 term_actions = 0; + u8 term_len = 0; + + cqp = qp->dev->cqp; + wqe = irdma_sc_cqp_get_next_send_wqe(cqp, scratch); + if (!wqe) + return IRDMA_ERR_RING_FULL; + + if (info->next_iwarp_state == IRDMA_QP_STATE_TERMINATE) { + if (info->dont_send_fin) + term_actions += IRDMAQP_TERM_SEND_TERM_ONLY; + if (info->dont_send_term) + term_actions += IRDMAQP_TERM_SEND_FIN_ONLY; + if (term_actions == IRDMAQP_TERM_SEND_TERM_AND_FIN || + term_actions == IRDMAQP_TERM_SEND_TERM_ONLY) + term_len = info->termlen; + } + + set_64bit_val(wqe, 8, + LS_64(info->new_mss, IRDMA_CQPSQ_QP_NEWMSS) | + LS_64(term_len, IRDMA_CQPSQ_QP_TERMLEN)); + set_64bit_val(wqe, 16, qp->hw_host_ctx_pa); + set_64bit_val(wqe, 40, qp->shadow_area_pa); + + hdr = qp->qp_uk.qp_id | + LS_64(IRDMA_CQP_OP_MODIFY_QP, IRDMA_CQPSQ_OPCODE) | + LS_64(info->ord_valid, IRDMA_CQPSQ_QP_ORDVALID) | + LS_64(info->tcp_ctx_valid, IRDMA_CQPSQ_QP_TOECTXVALID) | + LS_64(info->cached_var_valid, IRDMA_CQPSQ_QP_CACHEDVARVALID) | + LS_64(qp->virtual_map, IRDMA_CQPSQ_QP_VQ) | + LS_64(info->force_lpb, IRDMA_CQPSQ_QP_FORCELOOPBACK) | + LS_64(info->cq_num_valid, IRDMA_CQPSQ_QP_CQNUMVALID) | + LS_64(info->force_lpb, IRDMA_CQPSQ_QP_FORCELOOPBACK) | + LS_64(info->mac_valid, IRDMA_CQPSQ_QP_MACVALID) | + LS_64(qp->qp_type, IRDMA_CQPSQ_QP_QPTYPE) | + LS_64(info->mss_change, IRDMA_CQPSQ_QP_MSSCHANGE) | + LS_64(info->remove_hash_idx, IRDMA_CQPSQ_QP_REMOVEHASHENTRY) | + LS_64(term_actions, IRDMA_CQPSQ_QP_TERMACT) | + LS_64(info->reset_tcp_conn, IRDMA_CQPSQ_QP_RESETCON) | + LS_64(info->arp_cache_idx_valid, IRDMA_CQPSQ_QP_ARPTABIDXVALID) | + LS_64(info->next_iwarp_state, IRDMA_CQPSQ_QP_NEXTIWSTATE) | + LS_64(cqp->polarity, IRDMA_CQPSQ_WQEVALID); + dma_wmb(); /* make sure WQE is written before valid bit is set */ + + set_64bit_val(wqe, 24, hdr); + + irdma_debug_buf(cqp->dev, IRDMA_DEBUG_WQE, "QP_MODIFY WQE", wqe, + IRDMA_CQP_WQE_SIZE * 8); + if (post_sq) + irdma_sc_cqp_post_sq(cqp); + + return 0; +} + +/** + * irdma_sc_qp_destroy - cqp destroy qp + * @qp: sc qp + * @scratch: u64 saved to be used during cqp completion + * @remove_hash_idx: flag if to remove hash idx + * @ignore_mw_bnd: memory window bind flag + * @post_sq: flag for cqp db to ring + */ +static enum irdma_status_code +irdma_sc_qp_destroy(struct irdma_sc_qp *qp, u64 scratch, bool remove_hash_idx, + bool ignore_mw_bnd, bool post_sq) +{ + __le64 *wqe; + struct 
irdma_sc_cqp *cqp; + u64 hdr; + + irdma_qp_rem_qos(qp); + cqp = qp->dev->cqp; + wqe = irdma_sc_cqp_get_next_send_wqe(cqp, scratch); + if (!wqe) + return IRDMA_ERR_RING_FULL; + + set_64bit_val(wqe, 16, qp->hw_host_ctx_pa); + set_64bit_val(wqe, 40, qp->shadow_area_pa); + + hdr = qp->qp_uk.qp_id | + LS_64(IRDMA_CQP_OP_DESTROY_QP, IRDMA_CQPSQ_OPCODE) | + LS_64(qp->qp_type, IRDMA_CQPSQ_QP_QPTYPE) | + LS_64(ignore_mw_bnd, IRDMA_CQPSQ_QP_IGNOREMWBOUND) | + LS_64(remove_hash_idx, IRDMA_CQPSQ_QP_REMOVEHASHENTRY) | + LS_64(cqp->polarity, IRDMA_CQPSQ_WQEVALID); + dma_wmb(); /* make sure WQE is written before valid bit is set */ + + set_64bit_val(wqe, 24, hdr); + + irdma_debug_buf(cqp->dev, IRDMA_DEBUG_WQE, "QP_DESTROY WQE", wqe, + IRDMA_CQP_WQE_SIZE * 8); + if (post_sq) + irdma_sc_cqp_post_sq(cqp); + + return 0; +} + +/** + * irdma_sc_qp_setctx_roce - set qp's context + * @qp: sc qp + * @qp_ctx: context ptr + * @info: ctx info + */ +static enum irdma_status_code +irdma_sc_qp_setctx_roce(struct irdma_sc_qp *qp, __le64 *qp_ctx, + struct irdma_qp_host_ctx_info *info) +{ + struct irdma_roce_offload_info *roce_info; + struct irdma_udp_offload_info *udp; + u8 push_mode_en; + u16 push_idx; + u64 mac; + + roce_info = info->roce_info; + udp = info->udp_info; + + mac = LS_64_1(roce_info->mac_addr[5], 16) | + LS_64_1(roce_info->mac_addr[4], 24) | + LS_64_1(roce_info->mac_addr[3], 32) | + LS_64_1(roce_info->mac_addr[2], 40) | + LS_64_1(roce_info->mac_addr[1], 48) | + LS_64_1(roce_info->mac_addr[0], 56); + + qp->user_pri = info->user_pri; + if (info->add_to_qoslist) + irdma_qp_add_qos(qp); + if (qp->push_idx == IRDMA_INVALID_PUSH_PAGE_INDEX) { + push_mode_en = 0; + push_idx = 0; + } else { + push_mode_en = 1; + push_idx = qp->push_idx; + } + set_64bit_val(qp_ctx, 0, + LS_64(qp->qp_uk.rq_wqe_size, IRDMAQPC_RQWQESIZE) | + LS_64(qp->rcv_tph_en, IRDMAQPC_RCVTPHEN) | + LS_64(qp->xmit_tph_en, IRDMAQPC_XMITTPHEN) | + LS_64(qp->rq_tph_en, IRDMAQPC_RQTPHEN) | + LS_64(qp->sq_tph_en, IRDMAQPC_SQTPHEN) | + LS_64(push_idx, IRDMAQPC_PPIDX) | + LS_64(push_mode_en, IRDMAQPC_PMENA) | + LS_64(roce_info->pd_id >> 16, IRDMAQPC_PDIDXHI) | + LS_64(roce_info->dctcp_en, IRDMAQPC_DC_TCP_EN) | + LS_64(roce_info->err_rq_idx_valid, IRDMAQPC_ERR_RQ_IDX_VALID) | + LS_64(roce_info->is_qp1, IRDMAQPC_ISQP1) | + LS_64(roce_info->roce_tver, IRDMAQPC_ROCE_TVER) | + LS_64(udp->ipv4, IRDMAQPC_IPV4) | + LS_64(udp->insert_vlan_tag, IRDMAQPC_INSERTVLANTAG)); + set_64bit_val(qp_ctx, 8, qp->sq_pa); + set_64bit_val(qp_ctx, 16, qp->rq_pa); + if ((roce_info->dcqcn_en || roce_info->dctcp_en) && + !(udp->tos & 0x03)) + udp->tos |= ECN_CODE_PT_VAL; + set_64bit_val(qp_ctx, 24, + LS_64(qp->hw_rq_size, IRDMAQPC_RQSIZE) | + LS_64(qp->hw_sq_size, IRDMAQPC_SQSIZE) | + LS_64(udp->ttl, IRDMAQPC_TTL) | LS_64(udp->tos, IRDMAQPC_TOS) | + LS_64(udp->src_port, IRDMAQPC_SRCPORTNUM) | + LS_64(udp->dst_port, IRDMAQPC_DESTPORTNUM)); + set_64bit_val(qp_ctx, 32, + LS_64(udp->dest_ip_addr2, IRDMAQPC_DESTIPADDR2) | + LS_64(udp->dest_ip_addr3, IRDMAQPC_DESTIPADDR3)); + set_64bit_val(qp_ctx, 40, + LS_64(udp->dest_ip_addr0, IRDMAQPC_DESTIPADDR0) | + LS_64(udp->dest_ip_addr1, IRDMAQPC_DESTIPADDR1)); + set_64bit_val(qp_ctx, 48, + LS_64(udp->snd_mss, IRDMAQPC_SNDMSS) | + LS_64(udp->vlan_tag, IRDMAQPC_VLANTAG) | + LS_64(udp->arp_idx, IRDMAQPC_ARPIDX)); + set_64bit_val(qp_ctx, 56, + LS_64(roce_info->p_key, IRDMAQPC_PKEY) | + LS_64(roce_info->pd_id, IRDMAQPC_PDIDX) | + LS_64(roce_info->ack_credits, IRDMAQPC_ACKCREDITS) | + LS_64(udp->flow_label, IRDMAQPC_FLOWLABEL)); + set_64bit_val(qp_ctx, 
64, + LS_64(roce_info->qkey, IRDMAQPC_QKEY) | + LS_64(roce_info->dest_qp, IRDMAQPC_DESTQP)); + set_64bit_val(qp_ctx, 80, + LS_64(udp->psn_nxt, IRDMAQPC_PSNNXT) | + LS_64(udp->lsn, IRDMAQPC_LSN)); + set_64bit_val(qp_ctx, 88, LS_64(udp->epsn, IRDMAQPC_EPSN)); + set_64bit_val(qp_ctx, 96, + LS_64(udp->psn_max, IRDMAQPC_PSNMAX) | + LS_64(udp->psn_una, IRDMAQPC_PSNUNA)); + set_64bit_val(qp_ctx, 112, + LS_64(udp->cwnd, IRDMAQPC_CWNDROCE)); + set_64bit_val(qp_ctx, 128, + LS_64(roce_info->err_rq_idx, IRDMAQPC_ERR_RQ_IDX) | + LS_64(udp->rnr_nak_thresh, IRDMAQPC_RNRNAK_THRESH) | + LS_64(udp->rexmit_thresh, IRDMAQPC_REXMIT_THRESH)); + set_64bit_val(qp_ctx, 136, + LS_64(info->send_cq_num, IRDMAQPC_TXCQNUM) | + LS_64(info->rcv_cq_num, IRDMAQPC_RXCQNUM)); + set_64bit_val(qp_ctx, 144, + LS_64(info->stats_idx, IRDMAQPC_STAT_INDEX)); + set_64bit_val(qp_ctx, 152, mac); + set_64bit_val(qp_ctx, 160, + LS_64(roce_info->ord_size, IRDMAQPC_ORDSIZE) | + LS_64(roce_info->ird_size, IRDMAQPC_IRDSIZE) | + LS_64(roce_info->wr_rdresp_en, IRDMAQPC_WRRDRSPOK) | + LS_64(roce_info->rd_en, IRDMAQPC_RDOK) | + LS_64(info->stats_idx_valid, IRDMAQPC_USESTATSINSTANCE) | + LS_64(roce_info->bind_en, IRDMAQPC_BINDEN) | + LS_64(roce_info->fast_reg_en, IRDMAQPC_FASTREGEN) | + LS_64(roce_info->dcqcn_en, IRDMAQPC_DCQCNENABLE) | + LS_64(roce_info->rcv_no_icrc, IRDMAQPC_RCVNOICRC) | + LS_64(roce_info->fw_cc_enable, IRDMAQPC_FW_CC_ENABLE) | + LS_64(roce_info->udprivcq_en, IRDMAQPC_UDPRIVCQENABLE) | + LS_64(roce_info->priv_mode_en, IRDMAQPC_PRIVEN) | + LS_64(roce_info->timely_en, IRDMAQPC_TIMELYENABLE)); + set_64bit_val(qp_ctx, 168, + LS_64(info->qp_compl_ctx, IRDMAQPC_QPCOMPCTX)); + set_64bit_val(qp_ctx, 176, + LS_64(qp->sq_tph_val, IRDMAQPC_SQTPHVAL) | + LS_64(qp->rq_tph_val, IRDMAQPC_RQTPHVAL) | + LS_64(qp->qs_handle, IRDMAQPC_QSHANDLE)); + set_64bit_val(qp_ctx, 184, + LS_64(udp->local_ipaddr3, IRDMAQPC_LOCAL_IPADDR3) | + LS_64(udp->local_ipaddr2, IRDMAQPC_LOCAL_IPADDR2)); + set_64bit_val(qp_ctx, 192, + LS_64(udp->local_ipaddr1, IRDMAQPC_LOCAL_IPADDR1) | + LS_64(udp->local_ipaddr0, IRDMAQPC_LOCAL_IPADDR0)); + set_64bit_val(qp_ctx, 200, + LS_64(roce_info->t_high, IRDMAQPC_THIGH) | + LS_64(roce_info->t_low, IRDMAQPC_TLOW)); + set_64bit_val(qp_ctx, 208, + LS_64(info->rem_endpoint_idx, IRDMAQPC_REMENDPOINTIDX)); + + irdma_debug_buf(qp->dev, IRDMA_DEBUG_WQE, "QP_HOST CTX WQE", qp_ctx, + IRDMA_QP_CTX_SIZE); + + return 0; +} + +/** + * irdma_sc_alloc_local_mac_entry - allocate a mac entry + * @cqp: struct for cqp hw + * @scratch: u64 saved to be used during cqp completion + * @post_sq: flag for cqp db to ring + */ +static enum irdma_status_code +irdma_sc_alloc_local_mac_entry(struct irdma_sc_cqp *cqp, u64 scratch, + bool post_sq) +{ + __le64 *wqe; + u64 hdr; + + wqe = cqp->dev->cqp_ops->cqp_get_next_send_wqe(cqp, scratch); + if (!wqe) + return IRDMA_ERR_RING_FULL; + + hdr = LS_64(IRDMA_CQP_OP_ALLOCATE_LOC_MAC_TABLE_ENTRY, + IRDMA_CQPSQ_OPCODE) | + LS_64(cqp->polarity, IRDMA_CQPSQ_WQEVALID); + + dma_wmb(); /* make sure WQE is written before valid bit is set */ + + set_64bit_val(wqe, 24, hdr); + + irdma_debug_buf(cqp->dev, IRDMA_DEBUG_WQE, "ALLOCATE_LOCAL_MAC WQE", + wqe, IRDMA_CQP_WQE_SIZE * 8); + + if (post_sq) + cqp->dev->cqp_ops->cqp_post_sq(cqp); + return 0; +} + +/** + * irdma_sc_add_local_mac_entry - add mac enry + * @cqp: struct for cqp hw + * @info:mac addr info + * @scratch: u64 saved to be used during cqp completion + * @post_sq: flag for cqp db to ring + */ +static enum irdma_status_code +irdma_sc_add_local_mac_entry(struct 
irdma_sc_cqp *cqp, + struct irdma_local_mac_entry_info *info, + u64 scratch, bool post_sq) +{ + __le64 *wqe; + u64 temp, header; + + wqe = cqp->dev->cqp_ops->cqp_get_next_send_wqe(cqp, scratch); + if (!wqe) + return IRDMA_ERR_RING_FULL; + temp = info->mac_addr[5] | LS_64_1(info->mac_addr[4], 8) | + LS_64_1(info->mac_addr[3], 16) | LS_64_1(info->mac_addr[2], 24) | + LS_64_1(info->mac_addr[1], 32) | LS_64_1(info->mac_addr[0], 40); + + set_64bit_val(wqe, 32, temp); + + header = LS_64(info->entry_idx, IRDMA_CQPSQ_MLM_TABLEIDX) | + LS_64(IRDMA_CQP_OP_MANAGE_LOC_MAC_TABLE, IRDMA_CQPSQ_OPCODE) | + LS_64(cqp->polarity, IRDMA_CQPSQ_WQEVALID); + + dma_wmb(); /* make sure WQE is written before valid bit is set */ + + set_64bit_val(wqe, 24, header); + + irdma_debug_buf(cqp->dev, IRDMA_DEBUG_WQE, "ADD_LOCAL_MAC WQE", wqe, + IRDMA_CQP_WQE_SIZE * 8); + + if (post_sq) + cqp->dev->cqp_ops->cqp_post_sq(cqp); + return 0; +} + +/** + * irdma_sc_del_local_mac_entry - cqp wqe to dele local mac + * @cqp: struct for cqp hw + * @scratch: u64 saved to be used during cqp completion + * @entry_idx: index of mac entry + * @ignore_ref_count: to force mac adde delete + * @post_sq: flag for cqp db to ring + */ +static enum irdma_status_code +irdma_sc_del_local_mac_entry(struct irdma_sc_cqp *cqp, u64 scratch, + u16 entry_idx, u8 ignore_ref_count, bool post_sq) +{ + __le64 *wqe; + u64 header; + + wqe = cqp->dev->cqp_ops->cqp_get_next_send_wqe(cqp, scratch); + if (!wqe) + return IRDMA_ERR_RING_FULL; + header = LS_64(entry_idx, IRDMA_CQPSQ_MLM_TABLEIDX) | + LS_64(IRDMA_CQP_OP_MANAGE_LOC_MAC_TABLE, IRDMA_CQPSQ_OPCODE) | + LS_64(1, IRDMA_CQPSQ_MLM_FREEENTRY) | + LS_64(cqp->polarity, IRDMA_CQPSQ_WQEVALID) | + LS_64(ignore_ref_count, IRDMA_CQPSQ_MLM_IGNORE_REF_CNT); + + dma_wmb(); /* make sure WQE is written before valid bit is set */ + + set_64bit_val(wqe, 24, header); + + irdma_debug_buf(cqp->dev, IRDMA_DEBUG_WQE, "DEL_LOCAL_MAC_IPADDR WQE", + wqe, IRDMA_CQP_WQE_SIZE * 8); + + if (post_sq) + cqp->dev->cqp_ops->cqp_post_sq(cqp); + return 0; +} + +/** + * irdma_sc_qp_setctx - set qp's context + * @qp: sc qp + * @qp_ctx: context ptr + * @info: ctx info + */ +static enum irdma_status_code +irdma_sc_qp_setctx(struct irdma_sc_qp *qp, __le64 *qp_ctx, + struct irdma_qp_host_ctx_info *info) +{ + struct irdma_iwarp_offload_info *iw; + struct irdma_tcp_offload_info *tcp; + struct irdma_sc_dev *dev; + u8 push_mode_en; + u16 push_idx; + u64 qw0, qw3, qw7 = 0; + u64 mac = 0; + + iw = info->iwarp_info; + tcp = info->tcp_info; + dev = qp->dev; + + if (dev->hw_attrs.uk_attrs.hw_rev > IRDMA_GEN_1) { + mac = LS_64_1(iw->mac_addr[5], 16) | + LS_64_1(iw->mac_addr[4], 24) | + LS_64_1(iw->mac_addr[3], 32) | + LS_64_1(iw->mac_addr[2], 40) | + LS_64_1(iw->mac_addr[1], 48) | + LS_64_1(iw->mac_addr[0], 56); + } + + qp->user_pri = info->user_pri; + if (info->add_to_qoslist) + irdma_qp_add_qos(qp); + + if (qp->push_idx == IRDMA_INVALID_PUSH_PAGE_INDEX) { + push_mode_en = 0; + push_idx = 0; + } else { + push_mode_en = 1; + push_idx = qp->push_idx; + } + qw0 = LS_64(qp->qp_uk.rq_wqe_size, IRDMAQPC_RQWQESIZE) | + LS_64(iw->err_rq_idx_valid, IRDMAQPC_ERR_RQ_IDX_VALID) | + LS_64(qp->rcv_tph_en, IRDMAQPC_RCVTPHEN) | + LS_64(qp->xmit_tph_en, IRDMAQPC_XMITTPHEN) | + LS_64(qp->rq_tph_en, IRDMAQPC_RQTPHEN) | + LS_64(qp->sq_tph_en, IRDMAQPC_SQTPHEN) | + LS_64(push_idx, IRDMAQPC_PPIDX) | + LS_64(push_mode_en, IRDMAQPC_PMENA) | + LS_64(iw->ib_rd_en, IRDMAQPC_IBRDENABLE) | + LS_64(iw->pd_id >> 16, IRDMAQPC_PDIDXHI); + + set_64bit_val(qp_ctx, 8, qp->sq_pa); + 
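+ /* Descriptive note (not in the original patch): the host context is
+  * programmed as 8-byte quadwords addressed by byte offset, so QW1
+  * (offset 8) holds the sq pa written above and QW2 (offset 16) the
+  * rq pa below, while qw0/qw3/qw7 are accumulated and written out
+  * together at the end of this function.
+  */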
set_64bit_val(qp_ctx, 16, qp->rq_pa); + + qw3 = LS_64(qp->hw_rq_size, IRDMAQPC_RQSIZE) | + LS_64(qp->hw_sq_size, IRDMAQPC_SQSIZE); + if (dev->hw_attrs.uk_attrs.hw_rev == IRDMA_GEN_1) + qw3 |= LS_64(qp->src_mac_addr_idx, IRDMAQPC_GEN1_SRCMACADDRIDX); + set_64bit_val(qp_ctx, 128, + LS_64(iw->err_rq_idx, IRDMAQPC_ERR_RQ_IDX)); + set_64bit_val(qp_ctx, 136, + LS_64(info->send_cq_num, IRDMAQPC_TXCQNUM) | + LS_64(info->rcv_cq_num, IRDMAQPC_RXCQNUM)); + set_64bit_val(qp_ctx, 168, + LS_64(info->qp_compl_ctx, IRDMAQPC_QPCOMPCTX)); + set_64bit_val(qp_ctx, 176, + LS_64(qp->sq_tph_val, IRDMAQPC_SQTPHVAL) | + LS_64(qp->rq_tph_val, IRDMAQPC_RQTPHVAL) | + LS_64(qp->qs_handle, IRDMAQPC_QSHANDLE) | + LS_64(qp->ieq_qp, IRDMAQPC_EXCEPTION_LAN_QUEUE)); + if (info->iwarp_info_valid) { + qw0 |= LS_64(iw->ddp_ver, IRDMAQPC_DDP_VER) | + LS_64(iw->rdmap_ver, IRDMAQPC_RDMAP_VER) | + LS_64(iw->dctcp_en, IRDMAQPC_DC_TCP_EN) | + LS_64(iw->ecn_en, IRDMAQPC_ECN_EN); + qw7 |= LS_64(iw->pd_id, IRDMAQPC_PDIDX); + set_64bit_val(qp_ctx, 144, + LS_64(qp->q2_pa >> 8, IRDMAQPC_Q2ADDR) | + LS_64(info->stats_idx, IRDMAQPC_STAT_INDEX)); + set_64bit_val(qp_ctx, 152, + mac | LS_64(iw->last_byte_sent, IRDMAQPC_LASTBYTESENT)); + set_64bit_val(qp_ctx, 160, + LS_64(iw->ord_size, IRDMAQPC_ORDSIZE) | + LS_64(iw->ird_size, IRDMAQPC_IRDSIZE) | + LS_64(iw->wr_rdresp_en, IRDMAQPC_WRRDRSPOK) | + LS_64(iw->rd_en, IRDMAQPC_RDOK) | + LS_64(iw->snd_mark_en, IRDMAQPC_SNDMARKERS) | + LS_64(iw->bind_en, IRDMAQPC_BINDEN) | + LS_64(iw->fast_reg_en, IRDMAQPC_FASTREGEN) | + LS_64(iw->priv_mode_en, IRDMAQPC_PRIVEN) | + LS_64(info->stats_idx_valid, IRDMAQPC_USESTATSINSTANCE) | + LS_64(1, IRDMAQPC_IWARPMODE) | + LS_64(iw->rcv_mark_en, IRDMAQPC_RCVMARKERS) | + LS_64(iw->align_hdrs, IRDMAQPC_ALIGNHDRS) | + LS_64(iw->rcv_no_mpa_crc, IRDMAQPC_RCVNOMPACRC) | + LS_64(iw->rcv_mark_offset || !tcp ? iw->rcv_mark_offset : tcp->rcv_nxt, IRDMAQPC_RCVMARKOFFSET) | + LS_64(iw->snd_mark_offset || !tcp ? 
iw->snd_mark_offset : tcp->snd_nxt, IRDMAQPC_SNDMARKOFFSET) | + LS_64(iw->timely_en, IRDMAQPC_TIMELYENABLE)); + } + if (info->tcp_info_valid) { + qw0 |= LS_64(tcp->ipv4, IRDMAQPC_IPV4) | + LS_64(tcp->no_nagle, IRDMAQPC_NONAGLE) | + LS_64(tcp->insert_vlan_tag, IRDMAQPC_INSERTVLANTAG) | + LS_64(tcp->time_stamp, IRDMAQPC_TIMESTAMP) | + LS_64(tcp->cwnd_inc_limit, IRDMAQPC_LIMIT) | + LS_64(tcp->drop_ooo_seg, IRDMAQPC_DROPOOOSEG) | + LS_64(tcp->dup_ack_thresh, IRDMAQPC_DUPACK_THRESH); + + if ((iw->ecn_en || iw->dctcp_en) && !(tcp->tos & 0x03)) + tcp->tos |= ECN_CODE_PT_VAL; + + qw3 |= LS_64(tcp->ttl, IRDMAQPC_TTL) | + LS_64(tcp->avoid_stretch_ack, IRDMAQPC_AVOIDSTRETCHACK) | + LS_64(tcp->tos, IRDMAQPC_TOS) | + LS_64(tcp->src_port, IRDMAQPC_SRCPORTNUM) | + LS_64(tcp->dst_port, IRDMAQPC_DESTPORTNUM); + if (dev->hw_attrs.uk_attrs.hw_rev == IRDMA_GEN_1) { + qw3 |= LS_64(tcp->src_mac_addr_idx, + IRDMAQPC_GEN1_SRCMACADDRIDX); + + qp->src_mac_addr_idx = tcp->src_mac_addr_idx; + } + set_64bit_val(qp_ctx, 32, + LS_64(tcp->dest_ip_addr2, IRDMAQPC_DESTIPADDR2) | + LS_64(tcp->dest_ip_addr3, IRDMAQPC_DESTIPADDR3)); + set_64bit_val(qp_ctx, 40, + LS_64(tcp->dest_ip_addr0, IRDMAQPC_DESTIPADDR0) | + LS_64(tcp->dest_ip_addr1, IRDMAQPC_DESTIPADDR1)); + set_64bit_val(qp_ctx, 48, + LS_64(tcp->snd_mss, IRDMAQPC_SNDMSS) | + LS_64(tcp->syn_rst_handling, IRDMAQPC_SYN_RST_HANDLING) | + LS_64(tcp->vlan_tag, IRDMAQPC_VLANTAG) | + LS_64(tcp->arp_idx, IRDMAQPC_ARPIDX)); + qw7 |= LS_64(tcp->flow_label, IRDMAQPC_FLOWLABEL) | + LS_64(tcp->wscale, IRDMAQPC_WSCALE) | + LS_64(tcp->ignore_tcp_opt, IRDMAQPC_IGNORE_TCP_OPT) | + LS_64(tcp->ignore_tcp_uns_opt, + IRDMAQPC_IGNORE_TCP_UNS_OPT) | + LS_64(tcp->tcp_state, IRDMAQPC_TCPSTATE) | + LS_64(tcp->rcv_wscale, IRDMAQPC_RCVSCALE) | + LS_64(tcp->snd_wscale, IRDMAQPC_SNDSCALE); + set_64bit_val(qp_ctx, 72, + LS_64(tcp->time_stamp_recent, IRDMAQPC_TIMESTAMP_RECENT) | + LS_64(tcp->time_stamp_age, IRDMAQPC_TIMESTAMP_AGE)); + set_64bit_val(qp_ctx, 80, + LS_64(tcp->snd_nxt, IRDMAQPC_SNDNXT) | + LS_64(tcp->snd_wnd, IRDMAQPC_SNDWND)); + set_64bit_val(qp_ctx, 88, + LS_64(tcp->rcv_nxt, IRDMAQPC_RCVNXT) | + LS_64(tcp->rcv_wnd, IRDMAQPC_RCVWND)); + set_64bit_val(qp_ctx, 96, + LS_64(tcp->snd_max, IRDMAQPC_SNDMAX) | + LS_64(tcp->snd_una, IRDMAQPC_SNDUNA)); + set_64bit_val(qp_ctx, 104, + LS_64(tcp->srtt, IRDMAQPC_SRTT) | + LS_64(tcp->rtt_var, IRDMAQPC_RTTVAR)); + set_64bit_val(qp_ctx, 112, + LS_64(tcp->ss_thresh, IRDMAQPC_SSTHRESH) | + LS_64(tcp->cwnd, IRDMAQPC_CWND)); + set_64bit_val(qp_ctx, 120, + LS_64(tcp->snd_wl1, IRDMAQPC_SNDWL1) | + LS_64(tcp->snd_wl2, IRDMAQPC_SNDWL2)); + set_64bit_val(qp_ctx, 128, + LS_64(tcp->max_snd_window, IRDMAQPC_MAXSNDWND) | + LS_64(tcp->rexmit_thresh, IRDMAQPC_REXMIT_THRESH)); + set_64bit_val(qp_ctx, 184, + LS_64(tcp->local_ipaddr3, IRDMAQPC_LOCAL_IPADDR3) | + LS_64(tcp->local_ipaddr2, IRDMAQPC_LOCAL_IPADDR2)); + set_64bit_val(qp_ctx, 192, + LS_64(tcp->local_ipaddr1, IRDMAQPC_LOCAL_IPADDR1) | + LS_64(tcp->local_ipaddr0, IRDMAQPC_LOCAL_IPADDR0)); + set_64bit_val(qp_ctx, 200, + LS_64(iw->t_high, IRDMAQPC_THIGH) | + LS_64(iw->t_low, IRDMAQPC_TLOW)); + set_64bit_val(qp_ctx, 208, + LS_64(info->rem_endpoint_idx, IRDMAQPC_REMENDPOINTIDX)); + } + + set_64bit_val(qp_ctx, 0, qw0); + set_64bit_val(qp_ctx, 24, qw3); + set_64bit_val(qp_ctx, 56, qw7); + + irdma_debug_buf(qp->dev, IRDMA_DEBUG_WQE, "QP_HOST CTX", qp_ctx, + IRDMA_QP_CTX_SIZE); + + return 0; +} + +/** + * irdma_sc_alloc_stag - mr stag alloc + * @dev: sc device struct + * @info: stag info + * @scratch: u64 saved to be 
used during cqp completion + * @post_sq: flag for cqp db to ring + */ +static enum irdma_status_code +irdma_sc_alloc_stag(struct irdma_sc_dev *dev, + struct irdma_allocate_stag_info *info, u64 scratch, + bool post_sq) +{ + __le64 *wqe; + struct irdma_sc_cqp *cqp; + u64 hdr; + enum irdma_page_size page_size; + + if (info->page_size == 0x40000000) + page_size = IRDMA_PAGE_SIZE_1G; + else if (info->page_size == 0x200000) + page_size = IRDMA_PAGE_SIZE_2M; + else + page_size = IRDMA_PAGE_SIZE_4K; + + cqp = dev->cqp; + wqe = irdma_sc_cqp_get_next_send_wqe(cqp, scratch); + if (!wqe) + return IRDMA_ERR_RING_FULL; + + set_64bit_val(wqe, 8, + FLD_LS_64(dev, info->pd_id, IRDMA_CQPSQ_STAG_PDID) | + LS_64(info->total_len, IRDMA_CQPSQ_STAG_STAGLEN)); + set_64bit_val(wqe, 16, + LS_64(info->stag_idx, IRDMA_CQPSQ_STAG_IDX)); + set_64bit_val(wqe, 40, + LS_64(info->hmc_fcn_index, IRDMA_CQPSQ_STAG_HMCFNIDX)); + + if (info->chunk_size) + set_64bit_val(wqe, 48, + LS_64(info->first_pm_pbl_idx, IRDMA_CQPSQ_STAG_FIRSTPMPBLIDX)); + + hdr = LS_64(IRDMA_CQP_OP_ALLOC_STAG, IRDMA_CQPSQ_OPCODE) | + LS_64(1, IRDMA_CQPSQ_STAG_MR) | + LS_64(info->access_rights, IRDMA_CQPSQ_STAG_ARIGHTS) | + LS_64(info->chunk_size, IRDMA_CQPSQ_STAG_LPBLSIZE) | + LS_64(page_size, IRDMA_CQPSQ_STAG_HPAGESIZE) | + LS_64(info->remote_access, IRDMA_CQPSQ_STAG_REMACCENABLED) | + LS_64(info->use_hmc_fcn_index, IRDMA_CQPSQ_STAG_USEHMCFNIDX) | + LS_64(info->use_pf_rid, IRDMA_CQPSQ_STAG_USEPFRID) | + LS_64(cqp->polarity, IRDMA_CQPSQ_WQEVALID); + dma_wmb(); /* make sure WQE is written before valid bit is set */ + + set_64bit_val(wqe, 24, hdr); + + irdma_debug_buf(dev, IRDMA_DEBUG_WQE, "ALLOC_STAG WQE", wqe, + IRDMA_CQP_WQE_SIZE * 8); + if (post_sq) + irdma_sc_cqp_post_sq(cqp); + + return 0; +} + +/** + * irdma_sc_mr_reg_non_shared - non-shared mr registration + * @dev: sc device struct + * @info: mr info + * @scratch: u64 saved to be used during cqp completion + * @post_sq: flag for cqp db to ring + */ +static enum irdma_status_code +irdma_sc_mr_reg_non_shared(struct irdma_sc_dev *dev, + struct irdma_reg_ns_stag_info *info, u64 scratch, + bool post_sq) +{ + __le64 *wqe; + u64 temp; + struct irdma_sc_cqp *cqp; + u64 hdr; + u32 pble_obj_cnt; + bool remote_access; + u8 addr_type; + enum irdma_page_size page_size; + + if (info->page_size == 0x40000000) + page_size = IRDMA_PAGE_SIZE_1G; + else if (info->page_size == 0x200000) + page_size = IRDMA_PAGE_SIZE_2M; + else + page_size = IRDMA_PAGE_SIZE_4K; + + if (info->access_rights & (IRDMA_ACCESS_FLAGS_REMOTEREAD_ONLY | + IRDMA_ACCESS_FLAGS_REMOTEWRITE_ONLY)) + remote_access = true; + else + remote_access = false; + + pble_obj_cnt = dev->hmc_info->hmc_obj[IRDMA_HMC_IW_PBLE].cnt; + if (info->chunk_size && info->first_pm_pbl_index >= pble_obj_cnt) + return IRDMA_ERR_INVALID_PBLE_INDEX; + + cqp = dev->cqp; + wqe = irdma_sc_cqp_get_next_send_wqe(cqp, scratch); + if (!wqe) + return IRDMA_ERR_RING_FULL; + + temp = (info->addr_type == IRDMA_ADDR_TYPE_VA_BASED) ? 
+ (uintptr_t)info->va : info->fbo; + + set_64bit_val(wqe, 0, temp); + set_64bit_val(wqe, 8, + LS_64(info->total_len, IRDMA_CQPSQ_STAG_STAGLEN) | + FLD_LS_64(dev, info->pd_id, IRDMA_CQPSQ_STAG_PDID)); + set_64bit_val(wqe, 16, + LS_64(info->stag_key, IRDMA_CQPSQ_STAG_KEY) | + LS_64(info->stag_idx, IRDMA_CQPSQ_STAG_IDX)); + if (!info->chunk_size) { + set_64bit_val(wqe, 32, info->reg_addr_pa); + set_64bit_val(wqe, 48, 0); + } else { + set_64bit_val(wqe, 32, 0); + set_64bit_val(wqe, 48, + LS_64(info->first_pm_pbl_index, IRDMA_CQPSQ_STAG_FIRSTPMPBLIDX)); + } + set_64bit_val(wqe, 40, info->hmc_fcn_index); + set_64bit_val(wqe, 56, 0); + + addr_type = (info->addr_type == IRDMA_ADDR_TYPE_VA_BASED) ? 1 : 0; + hdr = LS_64(IRDMA_CQP_OP_REG_MR, IRDMA_CQPSQ_OPCODE) | + LS_64(1, IRDMA_CQPSQ_STAG_MR) | + LS_64(info->chunk_size, IRDMA_CQPSQ_STAG_LPBLSIZE) | + LS_64(page_size, IRDMA_CQPSQ_STAG_HPAGESIZE) | + LS_64(info->access_rights, IRDMA_CQPSQ_STAG_ARIGHTS) | + LS_64(remote_access, IRDMA_CQPSQ_STAG_REMACCENABLED) | + LS_64(addr_type, IRDMA_CQPSQ_STAG_VABASEDTO) | + LS_64(info->use_hmc_fcn_index, IRDMA_CQPSQ_STAG_USEHMCFNIDX) | + LS_64(info->use_pf_rid, IRDMA_CQPSQ_STAG_USEPFRID) | + LS_64(cqp->polarity, IRDMA_CQPSQ_WQEVALID); + dma_wmb(); /* make sure WQE is written before valid bit is set */ + + set_64bit_val(wqe, 24, hdr); + + irdma_debug_buf(dev, IRDMA_DEBUG_WQE, "MR_REG_NS WQE", wqe, + IRDMA_CQP_WQE_SIZE * 8); + if (post_sq) + irdma_sc_cqp_post_sq(cqp); + + return 0; +} + +/** + * irdma_sc_mr_reg_shared - registered shared memory region + * @dev: sc device struct + * @info: info for shared memory registration + * @scratch: u64 saved to be used during cqp completion + * @post_sq: flag for cqp db to ring + */ +static enum irdma_status_code +irdma_sc_mr_reg_shared(struct irdma_sc_dev *dev, + struct irdma_register_shared_stag *info, u64 scratch, + bool post_sq) +{ + __le64 *wqe; + struct irdma_sc_cqp *cqp; + u64 temp, va64, fbo, hdr; + u32 va32; + bool remote_access; + u8 addr_type; + + if (info->access_rights & (IRDMA_ACCESS_FLAGS_REMOTEREAD_ONLY | + IRDMA_ACCESS_FLAGS_REMOTEWRITE_ONLY)) + remote_access = true; + else + remote_access = false; + cqp = dev->cqp; + wqe = irdma_sc_cqp_get_next_send_wqe(cqp, scratch); + if (!wqe) + return IRDMA_ERR_RING_FULL; + va64 = (uintptr_t)(info->va); + va32 = (u32)(va64 & 0x00000000FFFFFFFF); + fbo = (u64)(va32 & (4096 - 1)); + + set_64bit_val(wqe, 0, + (info->addr_type == IRDMA_ADDR_TYPE_VA_BASED ? + (uintptr_t)info->va : fbo)); + set_64bit_val(wqe, 8, + FLD_LS_64(dev, info->pd_id, IRDMA_CQPSQ_STAG_PDID)); + temp = LS_64(info->new_stag_key, IRDMA_CQPSQ_STAG_KEY) | + LS_64(info->new_stag_idx, IRDMA_CQPSQ_STAG_IDX) | + LS_64(info->parent_stag_idx, IRDMA_CQPSQ_STAG_PARENTSTAGIDX); + set_64bit_val(wqe, 16, temp); + + addr_type = (info->addr_type == IRDMA_ADDR_TYPE_VA_BASED) ? 
1 : 0; + hdr = LS_64(IRDMA_CQP_OP_REG_SMR, IRDMA_CQPSQ_OPCODE) | + LS_64(1, IRDMA_CQPSQ_STAG_MR) | + LS_64(info->access_rights, IRDMA_CQPSQ_STAG_ARIGHTS) | + LS_64(remote_access, IRDMA_CQPSQ_STAG_REMACCENABLED) | + LS_64(addr_type, IRDMA_CQPSQ_STAG_VABASEDTO) | + LS_64(cqp->polarity, IRDMA_CQPSQ_WQEVALID); + dma_wmb(); /* make sure WQE is written before valid bit is set */ + + set_64bit_val(wqe, 24, hdr); + + irdma_debug_buf(dev, IRDMA_DEBUG_WQE, "MR_REG_SHARED WQE", wqe, + IRDMA_CQP_WQE_SIZE * 8); + if (post_sq) + irdma_sc_cqp_post_sq(cqp); + + return 0; +} + +/** + * irdma_sc_dealloc_stag - deallocate stag + * @dev: sc device struct + * @info: dealloc stag info + * @scratch: u64 saved to be used during cqp completion + * @post_sq: flag for cqp db to ring + */ +static enum irdma_status_code +irdma_sc_dealloc_stag(struct irdma_sc_dev *dev, + struct irdma_dealloc_stag_info *info, u64 scratch, + bool post_sq) +{ + u64 hdr; + __le64 *wqe; + struct irdma_sc_cqp *cqp; + + cqp = dev->cqp; + wqe = irdma_sc_cqp_get_next_send_wqe(cqp, scratch); + if (!wqe) + return IRDMA_ERR_RING_FULL; + + set_64bit_val(wqe, 8, + FLD_LS_64(dev, info->pd_id, IRDMA_CQPSQ_STAG_PDID)); + set_64bit_val(wqe, 16, + LS_64(info->stag_idx, IRDMA_CQPSQ_STAG_IDX)); + + hdr = LS_64(IRDMA_CQP_OP_DEALLOC_STAG, IRDMA_CQPSQ_OPCODE) | + LS_64(info->mr, IRDMA_CQPSQ_STAG_MR) | + LS_64(cqp->polarity, IRDMA_CQPSQ_WQEVALID); + dma_wmb(); /* make sure WQE is written before valid bit is set */ + + set_64bit_val(wqe, 24, hdr); + + irdma_debug_buf(dev, IRDMA_DEBUG_WQE, "DEALLOC_STAG WQE", wqe, + IRDMA_CQP_WQE_SIZE * 8); + if (post_sq) + irdma_sc_cqp_post_sq(cqp); + + return 0; +} + +/** + * irdma_sc_query_stag - query hardware for stag + * @dev: sc device struct + * @scratch: u64 saved to be used during cqp completion + * @stag_index: stag index for query + * @post_sq: flag for cqp db to ring + */ +static enum irdma_status_code irdma_sc_query_stag(struct irdma_sc_dev *dev, + u64 scratch, u32 stag_index, + bool post_sq) +{ + u64 hdr; + __le64 *wqe; + struct irdma_sc_cqp *cqp; + + cqp = dev->cqp; + wqe = irdma_sc_cqp_get_next_send_wqe(cqp, scratch); + if (!wqe) + return IRDMA_ERR_RING_FULL; + + set_64bit_val(wqe, 16, + LS_64(stag_index, IRDMA_CQPSQ_QUERYSTAG_IDX)); + + hdr = LS_64(IRDMA_CQP_OP_QUERY_STAG, IRDMA_CQPSQ_OPCODE) | + LS_64(cqp->polarity, IRDMA_CQPSQ_WQEVALID); + dma_wmb(); /* make sure WQE is written before valid bit is set */ + + set_64bit_val(wqe, 24, hdr); + + irdma_debug_buf(dev, IRDMA_DEBUG_WQE, "QUERY_STAG WQE", wqe, + IRDMA_CQP_WQE_SIZE * 8); + if (post_sq) + irdma_sc_cqp_post_sq(cqp); + + return 0; +} + +/** + * irdma_sc_mw_alloc - mw allocate + * @dev: sc device struct + * @info: memory window allocation information + * @scratch: u64 saved to be used during cqp completion + * @post_sq: flag for cqp db to ring + */ +static enum irdma_status_code +irdma_sc_mw_alloc(struct irdma_sc_dev *dev, struct irdma_mw_alloc_info *info, + u64 scratch, bool post_sq) +{ + u64 hdr; + struct irdma_sc_cqp *cqp; + __le64 *wqe; + + cqp = dev->cqp; + wqe = irdma_sc_cqp_get_next_send_wqe(cqp, scratch); + if (!wqe) + return IRDMA_ERR_RING_FULL; + + set_64bit_val(wqe, 8, + FLD_LS_64(dev, info->pd_id, IRDMA_CQPSQ_STAG_PDID)); + set_64bit_val(wqe, 16, + LS_64(info->mw_stag_index, IRDMA_CQPSQ_STAG_IDX)); + + hdr = LS_64(IRDMA_CQP_OP_ALLOC_STAG, IRDMA_CQPSQ_OPCODE) | + LS_64(info->mw_wide, IRDMA_CQPSQ_STAG_MWTYPE) | + LS_64(info->mw1_bind_dont_vldt_key, + IRDMA_CQPSQ_STAG_MW1_BIND_DONT_VLDT_KEY) | + LS_64(cqp->polarity, IRDMA_CQPSQ_WQEVALID); + 
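+ /* Descriptive note (not in the original patch): memory windows reuse
+  * the ALLOC_STAG CQP opcode; this request differs from
+  * irdma_sc_alloc_stag() in that the STAG_MR bit is left clear and the
+  * MWTYPE / MW1_BIND_DONT_VLDT_KEY bits are set from the mw info.
+  */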
dma_wmb(); /* make sure WQE is written before valid bit is set */ + + set_64bit_val(wqe, 24, hdr); + + irdma_debug_buf(dev, IRDMA_DEBUG_WQE, "MW_ALLOC WQE", wqe, + IRDMA_CQP_WQE_SIZE * 8); + if (post_sq) + irdma_sc_cqp_post_sq(cqp); + + return 0; +} + +/** + * irdma_sc_mr_fast_register - Posts RDMA fast register mr WR to iwarp qp + * @qp: sc qp struct + * @info: fast mr info + * @post_sq: flag for cqp db to ring + */ +enum irdma_status_code +irdma_sc_mr_fast_register(struct irdma_sc_qp *qp, + struct irdma_fast_reg_stag_info *info, bool post_sq) +{ + u64 temp, hdr; + __le64 *wqe; + u32 wqe_idx; + enum irdma_page_size page_size; + struct irdma_post_sq_info sq_info = {}; + + if (info->page_size == 0x40000000) + page_size = IRDMA_PAGE_SIZE_1G; + else if (info->page_size == 0x200000) + page_size = IRDMA_PAGE_SIZE_2M; + else + page_size = IRDMA_PAGE_SIZE_4K; + + sq_info.wr_id = info->wr_id; + sq_info.signaled = info->signaled; + sq_info.push_wqe = info->push_wqe; + + wqe = irdma_qp_get_next_send_wqe(&qp->qp_uk, &wqe_idx, + IRDMA_QP_WQE_MIN_QUANTA, 0, &sq_info); + if (!wqe) + return IRDMA_ERR_QP_TOOMANY_WRS_POSTED; + + irdma_clr_wqes(&qp->qp_uk, wqe_idx); + + dev_dbg(rfdev_to_dev(qp->dev), + "MR: wr_id[%llxh] wqe_idx[%04d] location[%p]\n", info->wr_id, + wqe_idx, &qp->qp_uk.sq_wrtrk_array[wqe_idx].wrid); + + temp = (info->addr_type == IRDMA_ADDR_TYPE_VA_BASED) ? + (uintptr_t)info->va : info->fbo; + set_64bit_val(wqe, 0, temp); + + temp = RS_64(info->first_pm_pbl_index >> 16, IRDMAQPSQ_FIRSTPMPBLIDXHI); + set_64bit_val(wqe, 8, + LS_64(temp, IRDMAQPSQ_FIRSTPMPBLIDXHI) | + LS_64(info->reg_addr_pa >> IRDMAQPSQ_PBLADDR_S, IRDMAQPSQ_PBLADDR)); + set_64bit_val(wqe, 16, + info->total_len | + LS_64(info->first_pm_pbl_index, IRDMAQPSQ_FIRSTPMPBLIDXLO)); + + hdr = LS_64(info->stag_key, IRDMAQPSQ_STAGKEY) | + LS_64(info->stag_idx, IRDMAQPSQ_STAGINDEX) | + LS_64(IRDMAQP_OP_FAST_REGISTER, IRDMAQPSQ_OPCODE) | + LS_64(info->chunk_size, IRDMAQPSQ_LPBLSIZE) | + LS_64(page_size, IRDMAQPSQ_HPAGESIZE) | + LS_64(info->access_rights, IRDMAQPSQ_STAGRIGHTS) | + LS_64(info->addr_type, IRDMAQPSQ_VABASEDTO) | + LS_64((sq_info.push_wqe ? 
1 : 0), IRDMAQPSQ_PUSHWQE) | + LS_64(info->read_fence, IRDMAQPSQ_READFENCE) | + LS_64(info->local_fence, IRDMAQPSQ_LOCALFENCE) | + LS_64(info->signaled, IRDMAQPSQ_SIGCOMPL) | + LS_64(qp->qp_uk.swqe_polarity, IRDMAQPSQ_VALID); + dma_wmb(); /* make sure WQE is written before valid bit is set */ + + set_64bit_val(wqe, 24, hdr); + + irdma_debug_buf(qp->dev, IRDMA_DEBUG_WQE, "FAST_REG WQE", wqe, + IRDMA_QP_WQE_MIN_SIZE); + if (sq_info.push_wqe) { + irdma_qp_push_wqe(&qp->qp_uk, wqe, IRDMA_QP_WQE_MIN_QUANTA, + wqe_idx, post_sq); + } else { + if (post_sq) + irdma_qp_post_wr(&qp->qp_uk); + } + + return 0; +} + +/** + * irdma_sc_gen_rts_ae - request AE generated after RTS + * @qp: sc qp struct + */ +static void irdma_sc_gen_rts_ae(struct irdma_sc_qp *qp) +{ + __le64 *wqe; + u64 hdr; + struct irdma_qp_uk *qp_uk; + + qp_uk = &qp->qp_uk; + + wqe = qp_uk->sq_base[1].elem; + + hdr = LS_64(IRDMAQP_OP_NOP, IRDMAQPSQ_OPCODE) | + LS_64(1, IRDMAQPSQ_LOCALFENCE) | + LS_64(qp->qp_uk.swqe_polarity, IRDMAQPSQ_VALID); + dma_wmb(); /* make sure WQE is written before valid bit is set */ + + set_64bit_val(wqe, 24, hdr); + irdma_debug_buf(qp->dev, IRDMA_DEBUG_QP, "NOP W/LOCAL FENCE WQE", wqe, + IRDMA_QP_WQE_MIN_SIZE); + + wqe = qp_uk->sq_base[2].elem; + hdr = LS_64(IRDMAQP_OP_GEN_RTS_AE, IRDMAQPSQ_OPCODE) | + LS_64(qp->qp_uk.swqe_polarity, IRDMAQPSQ_VALID); + dma_wmb(); /* make sure WQE is written before valid bit is set */ + + set_64bit_val(wqe, 24, hdr); + irdma_debug_buf(qp->dev, IRDMA_DEBUG_QP, "CONN EST WQE", wqe, + IRDMA_QP_WQE_MIN_SIZE); +} + +/** + * irdma_sc_send_lsmm - send last streaming mode message + * @qp: sc qp struct + * @lsmm_buf: buffer with lsmm message + * @size: size of lsmm buffer + * @stag: stag of lsmm buffer + */ +static void irdma_sc_send_lsmm(struct irdma_sc_qp *qp, void *lsmm_buf, u32 size, + irdma_stag stag) +{ + __le64 *wqe; + u64 hdr; + struct irdma_qp_uk *qp_uk; + + qp_uk = &qp->qp_uk; + wqe = qp_uk->sq_base->elem; + + set_64bit_val(wqe, 0, (uintptr_t)lsmm_buf); + if (qp->qp_uk.uk_attrs->hw_rev == IRDMA_GEN_1) { + set_64bit_val(wqe, 8, + LS_64(size, IRDMAQPSQ_GEN1_FRAG_LEN) | + LS_64(stag, IRDMAQPSQ_GEN1_FRAG_STAG)); + } else { + set_64bit_val(wqe, 8, + LS_64(size, IRDMAQPSQ_FRAG_LEN) | + LS_64(stag, IRDMAQPSQ_FRAG_STAG) | + LS_64(qp->qp_uk.swqe_polarity, IRDMAQPSQ_VALID)); + } + set_64bit_val(wqe, 16, 0); + + hdr = LS_64(IRDMAQP_OP_RDMA_SEND, IRDMAQPSQ_OPCODE) | + LS_64(1, IRDMAQPSQ_STREAMMODE) | + LS_64(1, IRDMAQPSQ_WAITFORRCVPDU) | + LS_64(qp->qp_uk.swqe_polarity, IRDMAQPSQ_VALID); + dma_wmb(); /* make sure WQE is written before valid bit is set */ + + set_64bit_val(wqe, 24, hdr); + + irdma_debug_buf(qp->dev, IRDMA_DEBUG_WQE, "SEND_LSMM WQE", wqe, + IRDMA_QP_WQE_MIN_SIZE); + + if (qp->dev->hw_attrs.uk_attrs.feature_flags & IRDMA_FEATURE_RTS_AE) + irdma_sc_gen_rts_ae(qp); +} + +/** + * irdma_sc_send_lsmm_nostag - for privilege qp + * @qp: sc qp struct + * @lsmm_buf: buffer with lsmm message + * @size: size of lsmm buffer + */ +static void irdma_sc_send_lsmm_nostag(struct irdma_sc_qp *qp, void *lsmm_buf, + u32 size) +{ + __le64 *wqe; + u64 hdr; + struct irdma_qp_uk *qp_uk; + + qp_uk = &qp->qp_uk; + wqe = qp_uk->sq_base->elem; + + set_64bit_val(wqe, 0, (uintptr_t)lsmm_buf); + + if (qp->qp_uk.uk_attrs->hw_rev == IRDMA_GEN_1) + set_64bit_val(wqe, 8, + LS_64(size, IRDMAQPSQ_GEN1_FRAG_LEN)); + else + set_64bit_val(wqe, 8, + LS_64(size, IRDMAQPSQ_FRAG_LEN) | + LS_64(qp->qp_uk.swqe_polarity, IRDMAQPSQ_VALID)); + set_64bit_val(wqe, 16, 0); + + hdr = LS_64(IRDMAQP_OP_RDMA_SEND, 
IRDMAQPSQ_OPCODE) | + LS_64(1, IRDMAQPSQ_STREAMMODE) | + LS_64(1, IRDMAQPSQ_WAITFORRCVPDU) | + LS_64(qp->qp_uk.swqe_polarity, IRDMAQPSQ_VALID); + dma_wmb(); /* make sure WQE is written before valid bit is set */ + + set_64bit_val(wqe, 24, hdr); + + irdma_debug_buf(qp->dev, IRDMA_DEBUG_WQE, "SEND_LSMM_NOSTAG WQE", wqe, + IRDMA_QP_WQE_MIN_SIZE); +} + +/** + * irdma_sc_send_rtt - send last read0 or write0 + * @qp: sc qp struct + * @read: Do read0 or write0 + */ +static void irdma_sc_send_rtt(struct irdma_sc_qp *qp, bool read) +{ + __le64 *wqe; + u64 hdr; + struct irdma_qp_uk *qp_uk; + + qp_uk = &qp->qp_uk; + wqe = qp_uk->sq_base->elem; + + set_64bit_val(wqe, 0, 0); + set_64bit_val(wqe, 16, 0); + if (read) { + if (qp->qp_uk.uk_attrs->hw_rev == IRDMA_GEN_1) { + set_64bit_val(wqe, 8, + LS_64(0xabcd, IRDMAQPSQ_GEN1_FRAG_STAG)); + } else { + set_64bit_val(wqe, 8, + (u64)0xabcd | LS_64(qp->qp_uk.swqe_polarity, + IRDMAQPSQ_VALID)); + } + hdr = LS_64(0x1234, IRDMAQPSQ_REMSTAG) | + LS_64(IRDMAQP_OP_RDMA_READ, IRDMAQPSQ_OPCODE) | + LS_64(qp->qp_uk.swqe_polarity, IRDMAQPSQ_VALID); + + } else { + if (qp->qp_uk.uk_attrs->hw_rev == IRDMA_GEN_1) { + set_64bit_val(wqe, 8, 0); + } else { + set_64bit_val(wqe, 8, + LS_64(qp->qp_uk.swqe_polarity, + IRDMAQPSQ_VALID)); + } + hdr = LS_64(IRDMAQP_OP_RDMA_WRITE, IRDMAQPSQ_OPCODE) | + LS_64(qp->qp_uk.swqe_polarity, IRDMAQPSQ_VALID); + } + + dma_wmb(); /* make sure WQE is written before valid bit is set */ + + set_64bit_val(wqe, 24, hdr); + + irdma_debug_buf(qp->dev, IRDMA_DEBUG_WQE, "RTR WQE", wqe, + IRDMA_QP_WQE_MIN_SIZE); + + if (qp->dev->hw_attrs.uk_attrs.feature_flags & IRDMA_FEATURE_RTS_AE) + irdma_sc_gen_rts_ae(qp); +} + +/** + * irdma_iwarp_opcode - determine if incoming is rdma layer + * @info: aeq info for the packet + * @pkt: packet for error + */ +static u32 irdma_iwarp_opcode(struct irdma_aeqe_info *info, u8 *pkt) +{ + __be16 *mpa; + u32 opcode = 0xffffffff; + + if (info->q2_data_written) { + mpa = (__be16 *)pkt; + opcode = ntohs(mpa[1]) & 0xf; + } + + return opcode; +} + +/** + * irdma_locate_mpa - return pointer to mpa in the pkt + * @pkt: packet with data + */ +static u8 *irdma_locate_mpa(u8 *pkt) +{ + /* skip over ethernet header */ + pkt += IRDMA_MAC_HLEN; + + /* Skip over IP and TCP headers */ + pkt += 4 * (pkt[0] & 0x0f); + pkt += 4 * ((pkt[12] >> 4) & 0x0f); + + return pkt; +} + +/** + * irdma_bld_termhdr_ctrl - setup terminate hdr control fields + * @qp: sc qp ptr for pkt + * @hdr: term hdr + * @opcode: flush opcode for termhdr + * @layer_etype: error layer + error type + * @err: error cod ein the header + */ +static void irdma_bld_termhdr_ctrl(struct irdma_sc_qp *qp, + struct irdma_terminate_hdr *hdr, + enum irdma_flush_opcode opcode, + u8 layer_etype, u8 err) +{ + qp->flush_code = opcode; + hdr->layer_etype = layer_etype; + hdr->error_code = err; +} + +/** + * irdma_bld_termhdr_ddp_rdma - setup ddp and rdma hdrs in terminate hdr + * @pkt: ptr to mpa in offending pkt + * @hdr: term hdr + * @copy_len: offending pkt length to be copied to term hdr + * @is_tagged: DDP tagged or untagged + */ +static void irdma_bld_termhdr_ddp_rdma(u8 *pkt, struct irdma_terminate_hdr *hdr, + int *copy_len, u8 *is_tagged) +{ + u16 ddp_seg_len; + + ddp_seg_len = ntohs(*(__be16 *)pkt); + if (ddp_seg_len) { + *copy_len = 2; + hdr->hdrct = DDP_LEN_FLAG; + if (pkt[2] & 0x80) { + *is_tagged = 1; + if (ddp_seg_len >= TERM_DDP_LEN_TAGGED) { + *copy_len += TERM_DDP_LEN_TAGGED; + hdr->hdrct |= DDP_HDR_FLAG; + } + } else { + if (ddp_seg_len >= TERM_DDP_LEN_UNTAGGED) { + 
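+ /* Descriptive note (not in the original patch): untagged DDP - copy
+  * the full DDP header into the terminate payload, and also append the
+  * RDMAP header when the offending frame is a Read Request.
+  */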
*copy_len += TERM_DDP_LEN_UNTAGGED; + hdr->hdrct |= DDP_HDR_FLAG; + } + if (ddp_seg_len >= (TERM_DDP_LEN_UNTAGGED + TERM_RDMA_LEN) && + ((pkt[3] & RDMA_OPCODE_M) == RDMA_READ_REQ_OPCODE)) { + *copy_len += TERM_RDMA_LEN; + hdr->hdrct |= RDMA_HDR_FLAG; + } + } + } +} + +/** + * irdma_bld_terminate_hdr - build terminate message header + * @qp: qp associated with received terminate AE + * @info: the struct contiaing AE information + */ +static int irdma_bld_terminate_hdr(struct irdma_sc_qp *qp, + struct irdma_aeqe_info *info) +{ + u8 *pkt = qp->q2_buf + Q2_BAD_FRAME_OFFSET; + int copy_len = 0; + u8 is_tagged = 0; + u32 opcode; + struct irdma_terminate_hdr *termhdr; + + termhdr = (struct irdma_terminate_hdr *)qp->q2_buf; + memset(termhdr, 0, Q2_BAD_FRAME_OFFSET); + + if (info->q2_data_written) { + pkt = irdma_locate_mpa(pkt); + irdma_bld_termhdr_ddp_rdma(pkt, termhdr, ©_len, &is_tagged); + } + + opcode = irdma_iwarp_opcode(info, pkt); + qp->eventtype = TERM_EVENT_QP_FATAL; + + switch (info->ae_id) { + case IRDMA_AE_AMP_UNALLOCATED_STAG: + qp->eventtype = TERM_EVENT_QP_ACCESS_ERR; + if (opcode == IRDMA_OP_TYPE_RDMA_WRITE) + irdma_bld_termhdr_ctrl(qp, termhdr, FLUSH_PROT_ERR, + (LAYER_DDP << 4) | DDP_TAGGED_BUF, + DDP_TAGGED_INV_STAG); + else + irdma_bld_termhdr_ctrl(qp, termhdr, FLUSH_REM_ACCESS_ERR, + (LAYER_RDMA << 4) | RDMAP_REMOTE_PROT, + RDMAP_INV_STAG); + break; + case IRDMA_AE_AMP_BOUNDS_VIOLATION: + qp->eventtype = TERM_EVENT_QP_ACCESS_ERR; + if (info->q2_data_written) + irdma_bld_termhdr_ctrl(qp, termhdr, FLUSH_PROT_ERR, + (LAYER_DDP << 4) | DDP_TAGGED_BUF, + DDP_TAGGED_BOUNDS); + else + irdma_bld_termhdr_ctrl(qp, termhdr, FLUSH_REM_ACCESS_ERR, + (LAYER_RDMA << 4) | RDMAP_REMOTE_PROT, + RDMAP_INV_BOUNDS); + break; + case IRDMA_AE_AMP_BAD_PD: + switch (opcode) { + case IRDMA_OP_TYPE_RDMA_WRITE: + irdma_bld_termhdr_ctrl(qp, termhdr, FLUSH_PROT_ERR, + (LAYER_DDP << 4) | DDP_TAGGED_BUF, + DDP_TAGGED_UNASSOC_STAG); + break; + case IRDMA_OP_TYPE_SEND_INV: + case IRDMA_OP_TYPE_SEND_SOL_INV: + irdma_bld_termhdr_ctrl(qp, termhdr, FLUSH_REM_ACCESS_ERR, + (LAYER_RDMA << 4) | RDMAP_REMOTE_PROT, + RDMAP_CANT_INV_STAG); + break; + default: + irdma_bld_termhdr_ctrl(qp, termhdr, FLUSH_REM_ACCESS_ERR, + (LAYER_RDMA << 4) | RDMAP_REMOTE_PROT, + RDMAP_UNASSOC_STAG); + } + break; + case IRDMA_AE_AMP_INVALID_STAG: + qp->eventtype = TERM_EVENT_QP_ACCESS_ERR; + irdma_bld_termhdr_ctrl(qp, termhdr, FLUSH_REM_ACCESS_ERR, + (LAYER_RDMA << 4) | RDMAP_REMOTE_PROT, + RDMAP_INV_STAG); + break; + case IRDMA_AE_AMP_BAD_QP: + irdma_bld_termhdr_ctrl(qp, termhdr, FLUSH_LOC_QP_OP_ERR, + (LAYER_DDP << 4) | DDP_UNTAGGED_BUF, + DDP_UNTAGGED_INV_QN); + break; + case IRDMA_AE_AMP_BAD_STAG_KEY: + case IRDMA_AE_AMP_BAD_STAG_INDEX: + qp->eventtype = TERM_EVENT_QP_ACCESS_ERR; + switch (opcode) { + case IRDMA_OP_TYPE_SEND_INV: + case IRDMA_OP_TYPE_SEND_SOL_INV: + irdma_bld_termhdr_ctrl(qp, termhdr, FLUSH_REM_OP_ERR, + (LAYER_RDMA << 4) | RDMAP_REMOTE_OP, + RDMAP_CANT_INV_STAG); + break; + default: + irdma_bld_termhdr_ctrl(qp, termhdr, FLUSH_REM_ACCESS_ERR, + (LAYER_RDMA << 4) | RDMAP_REMOTE_OP, + RDMAP_INV_STAG); + } + break; + case IRDMA_AE_AMP_RIGHTS_VIOLATION: + case IRDMA_AE_AMP_INVALIDATE_NO_REMOTE_ACCESS_RIGHTS: + case IRDMA_AE_PRIV_OPERATION_DENIED: + qp->eventtype = TERM_EVENT_QP_ACCESS_ERR; + irdma_bld_termhdr_ctrl(qp, termhdr, FLUSH_REM_ACCESS_ERR, + (LAYER_RDMA << 4) | RDMAP_REMOTE_PROT, + RDMAP_ACCESS); + break; + case IRDMA_AE_AMP_TO_WRAP: + qp->eventtype = TERM_EVENT_QP_ACCESS_ERR; + irdma_bld_termhdr_ctrl(qp, termhdr, 
FLUSH_REM_ACCESS_ERR, + (LAYER_RDMA << 4) | RDMAP_REMOTE_PROT, + RDMAP_TO_WRAP); + break; + case IRDMA_AE_LLP_RECEIVED_MPA_CRC_ERROR: + irdma_bld_termhdr_ctrl(qp, termhdr, FLUSH_GENERAL_ERR, + (LAYER_MPA << 4) | DDP_LLP, MPA_CRC); + break; + case IRDMA_AE_LLP_SEGMENT_TOO_SMALL: + irdma_bld_termhdr_ctrl(qp, termhdr, FLUSH_LOC_LEN_ERR, + (LAYER_DDP << 4) | DDP_CATASTROPHIC, + DDP_CATASTROPHIC_LOCAL); + break; + case IRDMA_AE_LCE_QP_CATASTROPHIC: + case IRDMA_AE_DDP_NO_L_BIT: + irdma_bld_termhdr_ctrl(qp, termhdr, FLUSH_FATAL_ERR, + (LAYER_DDP << 4) | DDP_CATASTROPHIC, + DDP_CATASTROPHIC_LOCAL); + break; + case IRDMA_AE_DDP_INVALID_MSN_GAP_IN_MSN: + irdma_bld_termhdr_ctrl(qp, termhdr, FLUSH_GENERAL_ERR, + (LAYER_DDP << 4) | DDP_UNTAGGED_BUF, + DDP_UNTAGGED_INV_MSN_RANGE); + break; + case IRDMA_AE_DDP_UBE_DDP_MESSAGE_TOO_LONG_FOR_AVAILABLE_BUFFER: + qp->eventtype = TERM_EVENT_QP_ACCESS_ERR; + irdma_bld_termhdr_ctrl(qp, termhdr, FLUSH_LOC_LEN_ERR, + (LAYER_DDP << 4) | DDP_UNTAGGED_BUF, + DDP_UNTAGGED_INV_TOO_LONG); + break; + case IRDMA_AE_DDP_UBE_INVALID_DDP_VERSION: + if (is_tagged) + irdma_bld_termhdr_ctrl(qp, termhdr, FLUSH_GENERAL_ERR, + (LAYER_DDP << 4) | DDP_TAGGED_BUF, + DDP_TAGGED_INV_DDP_VER); + else + irdma_bld_termhdr_ctrl(qp, termhdr, FLUSH_GENERAL_ERR, + (LAYER_DDP << 4) | DDP_UNTAGGED_BUF, + DDP_UNTAGGED_INV_DDP_VER); + break; + case IRDMA_AE_DDP_UBE_INVALID_MO: + irdma_bld_termhdr_ctrl(qp, termhdr, FLUSH_GENERAL_ERR, + (LAYER_DDP << 4) | DDP_UNTAGGED_BUF, + DDP_UNTAGGED_INV_MO); + break; + case IRDMA_AE_DDP_UBE_INVALID_MSN_NO_BUFFER_AVAILABLE: + irdma_bld_termhdr_ctrl(qp, termhdr, FLUSH_REM_OP_ERR, + (LAYER_DDP << 4) | DDP_UNTAGGED_BUF, + DDP_UNTAGGED_INV_MSN_NO_BUF); + break; + case IRDMA_AE_DDP_UBE_INVALID_QN: + irdma_bld_termhdr_ctrl(qp, termhdr, FLUSH_GENERAL_ERR, + (LAYER_DDP << 4) | DDP_UNTAGGED_BUF, + DDP_UNTAGGED_INV_QN); + break; + case IRDMA_AE_RDMAP_ROE_INVALID_RDMAP_VERSION: + irdma_bld_termhdr_ctrl(qp, termhdr, FLUSH_GENERAL_ERR, + (LAYER_RDMA << 4) | RDMAP_REMOTE_OP, + RDMAP_INV_RDMAP_VER); + break; + default: + irdma_bld_termhdr_ctrl(qp, termhdr, FLUSH_FATAL_ERR, + (LAYER_RDMA << 4) | RDMAP_REMOTE_OP, + RDMAP_UNSPECIFIED); + break; + } + + if (copy_len) + memcpy(termhdr + 1, pkt, copy_len); + + return sizeof(struct irdma_terminate_hdr) + copy_len; +} + +/** + * irdma_terminate_send_fin() - Send fin for terminate message + * @qp: qp associated with received terminate AE + */ +void irdma_terminate_send_fin(struct irdma_sc_qp *qp) +{ + irdma_term_modify_qp(qp, IRDMA_QP_STATE_TERMINATE, + IRDMAQP_TERM_SEND_FIN_ONLY, 0); +} + +/** + * irdma_terminate_connection() - Bad AE and send terminate to remote QP + * @qp: qp associated with received terminate AE + * @info: the struct contiaing AE information + */ +void irdma_terminate_connection(struct irdma_sc_qp *qp, + struct irdma_aeqe_info *info) +{ + u8 termlen = 0; + + if (qp->term_flags & IRDMA_TERM_SENT) + return; + + termlen = irdma_bld_terminate_hdr(qp, info); + irdma_terminate_start_timer(qp); + qp->term_flags |= IRDMA_TERM_SENT; + irdma_term_modify_qp(qp, IRDMA_QP_STATE_TERMINATE, + IRDMAQP_TERM_SEND_TERM_ONLY, termlen); +} + +/** + * irdma_terminate_received - handle terminate received AE + * @qp: qp associated with received terminate AE + * @info: the struct contiaing AE information + */ +void irdma_terminate_received(struct irdma_sc_qp *qp, + struct irdma_aeqe_info *info) +{ + u8 *pkt = qp->q2_buf + Q2_BAD_FRAME_OFFSET; + __be32 *mpa; + u8 ddp_ctl; + u8 rdma_ctl; + u16 aeq_id = 0; + struct irdma_terminate_hdr 
*termhdr; + + mpa = (__be32 *)irdma_locate_mpa(pkt); + if (info->q2_data_written) { + /* did not validate the frame - do it now */ + ddp_ctl = (ntohl(mpa[0]) >> 8) & 0xff; + rdma_ctl = ntohl(mpa[0]) & 0xff; + if ((ddp_ctl & 0xc0) != 0x40) + aeq_id = IRDMA_AE_LCE_QP_CATASTROPHIC; + else if ((ddp_ctl & 0x03) != 1) + aeq_id = IRDMA_AE_DDP_UBE_INVALID_DDP_VERSION; + else if (ntohl(mpa[2]) != 2) + aeq_id = IRDMA_AE_DDP_UBE_INVALID_QN; + else if (ntohl(mpa[3]) != 1) + aeq_id = IRDMA_AE_DDP_INVALID_MSN_GAP_IN_MSN; + else if (ntohl(mpa[4]) != 0) + aeq_id = IRDMA_AE_DDP_UBE_INVALID_MO; + else if ((rdma_ctl & 0xc0) != 0x40) + aeq_id = IRDMA_AE_RDMAP_ROE_INVALID_RDMAP_VERSION; + + info->ae_id = aeq_id; + if (info->ae_id) { + /* Bad terminate recvd - send back a terminate */ + irdma_terminate_connection(qp, info); + return; + } + } + + qp->term_flags |= IRDMA_TERM_RCVD; + qp->eventtype = TERM_EVENT_QP_FATAL; + termhdr = (struct irdma_terminate_hdr *)&mpa[5]; + if (termhdr->layer_etype == RDMAP_REMOTE_PROT || + termhdr->layer_etype == RDMAP_REMOTE_OP) { + irdma_terminate_done(qp, 0); + } else { + irdma_terminate_start_timer(qp); + irdma_terminate_send_fin(qp); + } +} + +static enum irdma_status_code irdma_null_ws_add(struct irdma_sc_vsi *vsi, + u8 user_pri) +{ + return 0; +} + +static void irdma_null_ws_remove(struct irdma_sc_vsi *vsi, u8 user_pri) +{ + /* do nothing */ +} + +static void irdma_null_ws_reset(struct irdma_sc_vsi *vsi) +{ + /* do nothing */ +} + +/** + * irdma_sc_vsi_init - Init the vsi structure + * @vsi: pointer to vsi structure to initialize + * @info: the info used to initialize the vsi struct + */ +void irdma_sc_vsi_init(struct irdma_sc_vsi *vsi, + struct irdma_vsi_init_info *info) +{ + int i; + u32 reg_data; + u32 __iomem *reg_addr; + struct irdma_l2params *l2p; + + vsi->dev = info->dev; + vsi->back_vsi = info->back_vsi; + vsi->register_qset = info->register_qset; + vsi->unregister_qset = info->unregister_qset; + vsi->mtu = info->params->mtu; + vsi->exception_lan_q = info->exception_lan_q; + vsi->vsi_idx = info->pf_data_vsi_num; + vsi->vm_vf_type = info->vm_vf_type; + vsi->vm_id = info->vm_id; + if (vsi->dev->hw_attrs.uk_attrs.hw_rev == IRDMA_GEN_1) + vsi->fcn_id = info->dev->hmc_fn_id; + + l2p = info->params; + vsi->qos_rel_bw = l2p->vsi_rel_bw; + vsi->qos_prio_type = l2p->vsi_prio_type; + for (i = 0; i < IRDMA_MAX_USER_PRIORITY; i++) { + if (vsi->dev->hw_attrs.uk_attrs.hw_rev == IRDMA_GEN_1) + vsi->qos[i].qs_handle = l2p->qs_handle_list[i]; + vsi->qos[i].traffic_class = info->params->up2tc[i]; + vsi->qos[i].rel_bw = + l2p->tc_info[vsi->qos[i].traffic_class].rel_bw; + vsi->qos[i].prio_type = + l2p->tc_info[vsi->qos[i].traffic_class].prio_type; + spin_lock_init(&vsi->qos[i].lock); + INIT_LIST_HEAD(&vsi->qos[i].qplist); + } + if (vsi->register_qset) { + vsi->dev->ws_add = irdma_ws_add; + vsi->dev->ws_remove = irdma_ws_remove; + vsi->dev->ws_reset = irdma_ws_reset; + } else { + vsi->dev->ws_add = irdma_null_ws_add; + vsi->dev->ws_remove = irdma_null_ws_remove; + vsi->dev->ws_reset = irdma_null_ws_reset; + } + if (info->dev->privileged) { + reg_addr = info->dev->hw_regs[IRDMA_VSIQF_PE_CTL1] + vsi->vsi_idx; + if (vsi->dev->hw_attrs.uk_attrs.hw_rev != IRDMA_GEN_1) { + writel(0x1, reg_addr); + } else { + reg_data = readl(reg_addr); + reg_data |= 0x2; + writel(reg_data, reg_addr); + } + } +} + +/** + * irdma_get_fcn_id - Return the function id + * @vsi: pointer to the vsi + */ +static u8 irdma_get_fcn_id(struct irdma_sc_vsi *vsi) +{ + struct irdma_stats_inst_info stats_info = {}; + struct 
irdma_sc_dev *dev = vsi->dev; + u8 fcn_id = IRDMA_INVALID_FCN_ID; + u8 start_idx, max_stats, i; + + if (dev->hw_attrs.uk_attrs.hw_rev != IRDMA_GEN_1) { + if (!irdma_cqp_stats_inst_cmd(vsi, IRDMA_OP_STATS_ALLOCATE, + &stats_info)) + return stats_info.stats_idx; + } + + start_idx = 1; + max_stats = 16; + for (i = start_idx; i < max_stats; i++) + if (!dev->fcn_id_array[i]) { + fcn_id = i; + dev->fcn_id_array[i] = true; + break; + } + + return fcn_id; +} + +/** + * irdma_vsi_stats_init - Initialize the vsi statistics + * @vsi: pointer to the vsi structure + * @info: The info structure used for initialization + */ +enum irdma_status_code irdma_vsi_stats_init(struct irdma_sc_vsi *vsi, + struct irdma_vsi_stats_info *info) +{ + u8 fcn_id = info->fcn_id; + struct irdma_dma_mem *stats_buff_mem; + + vsi->pestat = info->pestat; + vsi->pestat->hw = vsi->dev->hw; + vsi->pestat->vsi = vsi; + stats_buff_mem = &vsi->pestat->gather_info.stats_buff_mem; + stats_buff_mem->size = ALIGN(IRDMA_GATHER_STATS_BUF_SIZE * 2, 1); + stats_buff_mem->va = dma_alloc_coherent(hw_to_dev(vsi->pestat->hw), + stats_buff_mem->size, + &stats_buff_mem->pa, + GFP_KERNEL); + if (!stats_buff_mem->va) + return IRDMA_ERR_NO_MEMORY; + + vsi->pestat->gather_info.gather_stats = stats_buff_mem->va; + vsi->pestat->gather_info.last_gather_stats = + (void *)((uintptr_t)stats_buff_mem->va + + IRDMA_GATHER_STATS_BUF_SIZE); + + if (vsi->dev->privileged) + irdma_hw_stats_start_timer(vsi); + + if (info->alloc_fcn_id) + fcn_id = irdma_get_fcn_id(vsi); + if (fcn_id == IRDMA_INVALID_FCN_ID) + goto stats_error; + + vsi->stats_fcn_id_alloc = info->alloc_fcn_id; + vsi->fcn_id = fcn_id; + if (info->alloc_fcn_id) { + vsi->pestat->gather_info.use_stats_inst = true; + vsi->pestat->gather_info.stats_inst_index = fcn_id; + } + + return 0; + +stats_error: + dma_free_coherent(hw_to_dev(vsi->pestat->hw), stats_buff_mem->size, + stats_buff_mem->va, stats_buff_mem->pa); + stats_buff_mem->va = NULL; + + return IRDMA_ERR_CQP_COMPL_ERROR; +} + +/** + * irdma_vsi_stats_free - Free the vsi stats + * @vsi: pointer to the vsi structure + */ +void irdma_vsi_stats_free(struct irdma_sc_vsi *vsi) +{ + struct irdma_stats_inst_info stats_info = {}; + u8 fcn_id = vsi->fcn_id; + struct irdma_sc_dev *dev = vsi->dev; + + if (dev->hw_attrs.uk_attrs.hw_rev != IRDMA_GEN_1) { + if (vsi->stats_fcn_id_alloc) { + stats_info.stats_idx = vsi->fcn_id; + irdma_cqp_stats_inst_cmd(vsi, IRDMA_OP_STATS_FREE, + &stats_info); + } + } else { + if (vsi->stats_fcn_id_alloc && + fcn_id < vsi->dev->hw_attrs.max_stat_inst) + vsi->dev->fcn_id_array[fcn_id] = false; + } + + if (!vsi->pestat) + return; + if (vsi->dev->privileged) + irdma_hw_stats_stop_timer(vsi); + dma_free_coherent(hw_to_dev(vsi->pestat->hw), + vsi->pestat->gather_info.stats_buff_mem.size, + vsi->pestat->gather_info.stats_buff_mem.va, + vsi->pestat->gather_info.stats_buff_mem.pa); + vsi->pestat->gather_info.stats_buff_mem.va = NULL; +} + +/** + * irdma_get_encoded_wqe_size - given wq size, returns hardware encoded size + * @wqsize: size of the wq (sq, rq) to encoded_size + * @cqpsq: encoded size for sq for cqp as its encoded size is 1+ other wq's + */ +u8 irdma_get_encoded_wqe_size(u32 wqsize, bool cqpsq) +{ + u8 encoded_size = 0; + + /* cqp sq's hw coded value starts from 1 for size of 4 + * while it starts from 0 for qp' wq's. 
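+ * i.e. encoded_size works out to log2(wqsize) - 2 for a qp wq and
+ * log2(wqsize) - 1 for the cqp sq, e.g. wqsize 4 -> 0/1, 256 -> 6/7,
+ * 1024 -> 8/9.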
+ */ + if (cqpsq) + encoded_size = 1; + wqsize >>= 2; + while (wqsize >>= 1) + encoded_size++; + + return encoded_size; +} + +/** + * irdma_sc_gather_stats - collect the statistics + * @cqp: struct for cqp hw + * @info: gather stats info structure + * @scratch: u64 saved to be used during cqp completion + */ +static enum irdma_status_code +irdma_sc_gather_stats(struct irdma_sc_cqp *cqp, + struct irdma_stats_gather_info *info, u64 scratch) +{ + __le64 *wqe; + u64 temp; + + if (info->stats_buff_mem.size < IRDMA_GATHER_STATS_BUF_SIZE) + return IRDMA_ERR_BUF_TOO_SHORT; + + wqe = irdma_sc_cqp_get_next_send_wqe(cqp, scratch); + if (!wqe) + return IRDMA_ERR_RING_FULL; + + set_64bit_val(wqe, 40, + LS_64(info->hmc_fcn_index, IRDMA_CQPSQ_STATS_HMC_FCN_INDEX)); + set_64bit_val(wqe, 32, info->stats_buff_mem.pa); + + temp = LS_64(cqp->polarity, IRDMA_CQPSQ_STATS_WQEVALID) | + LS_64(info->use_stats_inst, IRDMA_CQPSQ_STATS_USE_INST) | + LS_64(info->stats_inst_index, IRDMA_CQPSQ_STATS_INST_INDEX) | + LS_64(info->use_hmc_fcn_index, + IRDMA_CQPSQ_STATS_USE_HMC_FCN_INDEX) | + LS_64(IRDMA_CQP_OP_GATHER_STATS, IRDMA_CQPSQ_STATS_OP); + dma_wmb(); /* make sure WQE is written before valid bit is set */ + + set_64bit_val(wqe, 24, temp); + + irdma_debug_buf(cqp->dev, IRDMA_DEBUG_STATS, "GATHER_STATS WQE", wqe, + IRDMA_CQP_WQE_SIZE * 8); + writel(IRDMA_RING_CURRENT_HEAD(cqp->sq_ring), + cqp->dev->hw_regs[IRDMA_CQPDB]); + dev_dbg(rfdev_to_dev(cqp->dev), + "STATS: CQP SQ head 0x%x tail 0x%x size 0x%x\n", + cqp->sq_ring.head, cqp->sq_ring.tail, cqp->sq_ring.size); + + return 0; +} + +/** + * irdma_sc_manage_stats_inst - allocate or free stats instance + * @cqp: struct for cqp hw + * @info: stats info structure + * @alloc: alloc vs. delete flag + * @scratch: u64 saved to be used during cqp completion + */ +static enum irdma_status_code +irdma_sc_manage_stats_inst(struct irdma_sc_cqp *cqp, + struct irdma_stats_inst_info *info, bool alloc, + u64 scratch) +{ + __le64 *wqe; + u64 temp; + + wqe = irdma_sc_cqp_get_next_send_wqe(cqp, scratch); + if (!wqe) + return IRDMA_ERR_RING_FULL; + + set_64bit_val(wqe, 40, + LS_64(info->hmc_fn_id, IRDMA_CQPSQ_STATS_HMC_FCN_INDEX)); + temp = LS_64(cqp->polarity, IRDMA_CQPSQ_STATS_WQEVALID) | + LS_64(alloc, IRDMA_CQPSQ_STATS_ALLOC_INST) | + LS_64(info->use_hmc_fcn_index, IRDMA_CQPSQ_STATS_USE_HMC_FCN_INDEX) | + LS_64(info->stats_idx, IRDMA_CQPSQ_STATS_INST_INDEX) | + LS_64(IRDMA_CQP_OP_MANAGE_STATS, IRDMA_CQPSQ_STATS_OP); + + dma_wmb(); /* make sure WQE is written before valid bit is set */ + + set_64bit_val(wqe, 24, temp); + + irdma_debug_buf(cqp->dev, IRDMA_DEBUG_WQE, "MANAGE_STATS WQE", wqe, + IRDMA_CQP_WQE_SIZE * 8); + + irdma_sc_cqp_post_sq(cqp); + return 0; +} + +/** + * irdma_sc_set_up_mapping - set the up map table + * @cqp: struct for cqp hw + * @info: User priority map info + * @scratch: u64 saved to be used during cqp completion + */ +static enum irdma_status_code irdma_sc_set_up_map(struct irdma_sc_cqp *cqp, + struct irdma_up_info *info, + u64 scratch) +{ + __le64 *wqe; + u64 temp; + + wqe = irdma_sc_cqp_get_next_send_wqe(cqp, scratch); + if (!wqe) + return IRDMA_ERR_RING_FULL; + + temp = info->map[0] | LS_64_1(info->map[1], 8) | + LS_64_1(info->map[2], 16) | LS_64_1(info->map[3], 24) | + LS_64_1(info->map[4], 32) | LS_64_1(info->map[5], 40) | + LS_64_1(info->map[6], 48) | LS_64_1(info->map[7], 56); + + set_64bit_val(wqe, 0, temp); + set_64bit_val(wqe, 40, + LS_64(info->cnp_up_override, IRDMA_CQPSQ_UP_CNPOVERRIDE) | + LS_64(info->hmc_fcn_idx, IRDMA_CQPSQ_UP_HMCFCNIDX)); + + temp 
= LS_64(cqp->polarity, IRDMA_CQPSQ_UP_WQEVALID) | + LS_64(info->use_vlan, IRDMA_CQPSQ_UP_USEVLAN) | + LS_64(info->use_cnp_up_override, IRDMA_CQPSQ_UP_USEOVERRIDE) | + LS_64(IRDMA_CQP_OP_UP_MAP, IRDMA_CQPSQ_UP_OP); + dma_wmb(); /* make sure WQE is written before valid bit is set */ + + set_64bit_val(wqe, 24, temp); + + irdma_debug_buf(cqp->dev, IRDMA_DEBUG_WQE, "UPMAP WQE", wqe, + IRDMA_CQP_WQE_SIZE * 8); + irdma_sc_cqp_post_sq(cqp); + + return 0; +} + +/** + * irdma_sc_manage_ws_node - create/modify/destroy WS node + * @cqp: struct for cqp hw + * @info: node info structure + * @node_op: 0 for add 1 for modify, 2 for delete + * @scratch: u64 saved to be used during cqp completion + */ +static enum irdma_status_code +irdma_sc_manage_ws_node(struct irdma_sc_cqp *cqp, + struct irdma_ws_node_info *info, + enum irdma_ws_node_op node_op, u64 scratch) +{ + __le64 *wqe; + u64 temp = 0; + + wqe = irdma_sc_cqp_get_next_send_wqe(cqp, scratch); + if (!wqe) + return IRDMA_ERR_RING_FULL; + + set_64bit_val(wqe, 32, + LS_64(info->vsi, IRDMA_CQPSQ_WS_VSI) | + LS_64(info->weight, IRDMA_CQPSQ_WS_WEIGHT)); + + temp = LS_64(cqp->polarity, IRDMA_CQPSQ_WS_WQEVALID) | + LS_64(node_op, IRDMA_CQPSQ_WS_NODEOP) | + LS_64(info->enable, IRDMA_CQPSQ_WS_ENABLENODE) | + LS_64(info->type_leaf, IRDMA_CQPSQ_WS_NODETYPE) | + LS_64(info->prio_type, IRDMA_CQPSQ_WS_PRIOTYPE) | + LS_64(info->tc, IRDMA_CQPSQ_WS_TC) | + LS_64(IRDMA_CQP_OP_WORK_SCHED_NODE, IRDMA_CQPSQ_WS_OP) | + LS_64(info->parent_id, IRDMA_CQPSQ_WS_PARENTID) | + LS_64(info->id, IRDMA_CQPSQ_WS_NODEID); + dma_wmb(); /* make sure WQE is written before valid bit is set */ + + set_64bit_val(wqe, 24, temp); + + irdma_debug_buf(cqp->dev, IRDMA_DEBUG_WQE, "MANAGE_WS WQE", wqe, + IRDMA_CQP_WQE_SIZE * 8); + irdma_sc_cqp_post_sq(cqp); + + return 0; +} + +/** + * irdma_sc_qp_flush_wqes - flush qp's wqe + * @qp: sc qp + * @info: dlush information + * @scratch: u64 saved to be used during cqp completion + * @post_sq: flag for cqp db to ring + */ +static enum irdma_status_code +irdma_sc_qp_flush_wqes(struct irdma_sc_qp *qp, struct irdma_qp_flush_info *info, + u64 scratch, bool post_sq) +{ + u64 temp = 0; + __le64 *wqe; + struct irdma_sc_cqp *cqp; + u64 hdr; + bool flush_sq = false, flush_rq = false; + + if (info->rq && !qp->flush_rq) + flush_rq = true; + if (info->sq && !qp->flush_sq) + flush_sq = true; + qp->flush_sq |= flush_sq; + qp->flush_rq |= flush_rq; + + if (!flush_sq && !flush_rq) { + dev_dbg(rfdev_to_dev(qp->dev), + "CQP: Additional flush request ignored\n"); + return 0; + } + + cqp = qp->pd->dev->cqp; + wqe = irdma_sc_cqp_get_next_send_wqe(cqp, scratch); + if (!wqe) + return IRDMA_ERR_RING_FULL; + + if (info->userflushcode) { + if (flush_rq) + temp |= LS_64(info->rq_minor_code, IRDMA_CQPSQ_FWQE_RQMNERR) | + LS_64(info->rq_major_code, IRDMA_CQPSQ_FWQE_RQMJERR); + if (flush_sq) + temp |= LS_64(info->sq_minor_code, IRDMA_CQPSQ_FWQE_SQMNERR) | + LS_64(info->sq_major_code, IRDMA_CQPSQ_FWQE_SQMJERR); + } + set_64bit_val(wqe, 16, temp); + + temp = (info->generate_ae) ? 
+ info->ae_code | LS_64(info->ae_src, IRDMA_CQPSQ_FWQE_AESOURCE) : 0; + set_64bit_val(wqe, 8, temp); + + hdr = qp->qp_uk.qp_id | + LS_64(IRDMA_CQP_OP_FLUSH_WQES, IRDMA_CQPSQ_OPCODE) | + LS_64(info->generate_ae, IRDMA_CQPSQ_FWQE_GENERATE_AE) | + LS_64(info->userflushcode, IRDMA_CQPSQ_FWQE_USERFLCODE) | + LS_64(flush_sq, IRDMA_CQPSQ_FWQE_FLUSHSQ) | + LS_64(flush_rq, IRDMA_CQPSQ_FWQE_FLUSHRQ) | + LS_64(cqp->polarity, IRDMA_CQPSQ_WQEVALID); + dma_wmb(); /* make sure WQE is written before valid bit is set */ + + set_64bit_val(wqe, 24, hdr); + + irdma_debug_buf(cqp->dev, IRDMA_DEBUG_WQE, "QP_FLUSH WQE", wqe, + IRDMA_CQP_WQE_SIZE * 8); + if (post_sq) + irdma_sc_cqp_post_sq(cqp); + + return 0; +} + +/** + * irdma_sc_gen_ae - generate AE, uses flush WQE CQP OP + * @qp: sc qp + * @info: gen ae information + * @scratch: u64 saved to be used during cqp completion + * @post_sq: flag for cqp db to ring + */ +static enum irdma_status_code irdma_sc_gen_ae(struct irdma_sc_qp *qp, + struct irdma_gen_ae_info *info, + u64 scratch, bool post_sq) +{ + u64 temp; + __le64 *wqe; + struct irdma_sc_cqp *cqp; + u64 hdr; + + cqp = qp->pd->dev->cqp; + wqe = irdma_sc_cqp_get_next_send_wqe(cqp, scratch); + if (!wqe) + return IRDMA_ERR_RING_FULL; + + temp = info->ae_code | LS_64(info->ae_src, IRDMA_CQPSQ_FWQE_AESOURCE); + set_64bit_val(wqe, 8, temp); + + hdr = qp->qp_uk.qp_id | LS_64(IRDMA_CQP_OP_GEN_AE, IRDMA_CQPSQ_OPCODE) | + LS_64(1, IRDMA_CQPSQ_FWQE_GENERATE_AE) | + LS_64(cqp->polarity, IRDMA_CQPSQ_WQEVALID); + dma_wmb(); /* make sure WQE is written before valid bit is set */ + + set_64bit_val(wqe, 24, hdr); + + irdma_debug_buf(cqp->dev, IRDMA_DEBUG_WQE, "GEN_AE WQE", wqe, + IRDMA_CQP_WQE_SIZE * 8); + if (post_sq) + irdma_sc_cqp_post_sq(cqp); + + return 0; +} + +/*** irdma_sc_qp_upload_context - upload qp's context + * @dev: sc device struct + * @info: upload context info ptr for return + * @scratch: u64 saved to be used during cqp completion + * @post_sq: flag for cqp db to ring + */ +static enum irdma_status_code +irdma_sc_qp_upload_context(struct irdma_sc_dev *dev, + struct irdma_upload_context_info *info, u64 scratch, + bool post_sq) +{ + __le64 *wqe; + struct irdma_sc_cqp *cqp; + u64 hdr; + + cqp = dev->cqp; + wqe = irdma_sc_cqp_get_next_send_wqe(cqp, scratch); + if (!wqe) + return IRDMA_ERR_RING_FULL; + + set_64bit_val(wqe, 16, info->buf_pa); + + hdr = LS_64(info->qp_id, IRDMA_CQPSQ_UCTX_QPID) | + LS_64(IRDMA_CQP_OP_UPLOAD_CONTEXT, IRDMA_CQPSQ_OPCODE) | + LS_64(info->qp_type, IRDMA_CQPSQ_UCTX_QPTYPE) | + LS_64(info->raw_format, IRDMA_CQPSQ_UCTX_RAWFORMAT) | + LS_64(info->freeze_qp, IRDMA_CQPSQ_UCTX_FREEZEQP) | + LS_64(cqp->polarity, IRDMA_CQPSQ_WQEVALID); + dma_wmb(); /* make sure WQE is written before valid bit is set */ + + set_64bit_val(wqe, 24, hdr); + + irdma_debug_buf(dev, IRDMA_DEBUG_WQE, "QP_UPLOAD_CTX WQE", wqe, + IRDMA_CQP_WQE_SIZE * 8); + if (post_sq) + irdma_sc_cqp_post_sq(cqp); + + return 0; +} + +/** + * irdma_sc_manage_push_page - Handle push page + * @cqp: struct for cqp hw + * @info: push page info + * @scratch: u64 saved to be used during cqp completion + * @post_sq: flag for cqp db to ring + */ +static enum irdma_status_code +irdma_sc_manage_push_page(struct irdma_sc_cqp *cqp, + struct irdma_cqp_manage_push_page_info *info, + u64 scratch, bool post_sq) +{ + __le64 *wqe; + u64 hdr; + + if (info->push_idx >= cqp->dev->hw_attrs.max_hw_device_pages) + return IRDMA_ERR_INVALID_PUSH_PAGE_INDEX; + + wqe = irdma_sc_cqp_get_next_send_wqe(cqp, scratch); + if (!wqe) + return IRDMA_ERR_RING_FULL; + + 
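/* The WQE data quadwords are filled first; the header at offset 24, + * which carries the opcode and the valid (polarity) bit, is written + * only after dma_wmb() so hardware never sees a valid descriptor with + * an incomplete payload. + */ +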
set_64bit_val(wqe, 16, info->qs_handle); + hdr = LS_64(info->push_idx, IRDMA_CQPSQ_MPP_PPIDX) | + LS_64(info->push_page_type, IRDMA_CQPSQ_MPP_PPTYPE) | + LS_64(IRDMA_CQP_OP_MANAGE_PUSH_PAGES, IRDMA_CQPSQ_OPCODE) | + LS_64(cqp->polarity, IRDMA_CQPSQ_WQEVALID) | + LS_64(info->free_page, IRDMA_CQPSQ_MPP_FREE_PAGE); + dma_wmb(); /* make sure WQE is written before valid bit is set */ + + set_64bit_val(wqe, 24, hdr); + + irdma_debug_buf(cqp->dev, IRDMA_DEBUG_WQE, "MANAGE_PUSH_PAGES WQE", wqe, + IRDMA_CQP_WQE_SIZE * 8); + if (post_sq) + irdma_sc_cqp_post_sq(cqp); + + return 0; +} + +/** + * irdma_sc_suspend_qp - suspend qp for param change + * @cqp: struct for cqp hw + * @qp: sc qp struct + * @scratch: u64 saved to be used during cqp completion + */ +static enum irdma_status_code irdma_sc_suspend_qp(struct irdma_sc_cqp *cqp, + struct irdma_sc_qp *qp, + u64 scratch) +{ + u64 hdr; + __le64 *wqe; + + wqe = irdma_sc_cqp_get_next_send_wqe(cqp, scratch); + if (!wqe) + return IRDMA_ERR_RING_FULL; + + hdr = LS_64(qp->qp_uk.qp_id, IRDMA_CQPSQ_SUSPENDQP_QPID) | + LS_64(IRDMA_CQP_OP_SUSPEND_QP, IRDMA_CQPSQ_OPCODE) | + LS_64(cqp->polarity, IRDMA_CQPSQ_WQEVALID); + dma_wmb(); /* make sure WQE is written before valid bit is set */ + + set_64bit_val(wqe, 24, hdr); + + irdma_debug_buf(cqp->dev, IRDMA_DEBUG_WQE, "SUSPEND_QP WQE", wqe, + IRDMA_CQP_WQE_SIZE * 8); + irdma_sc_cqp_post_sq(cqp); + + return 0; +} + +/** + * irdma_sc_resume_qp - resume qp after suspend + * @cqp: struct for cqp hw + * @qp: sc qp struct + * @scratch: u64 saved to be used during cqp completion + */ +static enum irdma_status_code irdma_sc_resume_qp(struct irdma_sc_cqp *cqp, + struct irdma_sc_qp *qp, + u64 scratch) +{ + u64 hdr; + __le64 *wqe; + + wqe = irdma_sc_cqp_get_next_send_wqe(cqp, scratch); + if (!wqe) + return IRDMA_ERR_RING_FULL; + + set_64bit_val(wqe, 16, + LS_64(qp->qs_handle, IRDMA_CQPSQ_RESUMEQP_QSHANDLE)); + + hdr = LS_64(qp->qp_uk.qp_id, IRDMA_CQPSQ_RESUMEQP_QPID) | + LS_64(IRDMA_CQP_OP_RESUME_QP, IRDMA_CQPSQ_OPCODE) | + LS_64(cqp->polarity, IRDMA_CQPSQ_WQEVALID); + dma_wmb(); /* make sure WQE is written before valid bit is set */ + + set_64bit_val(wqe, 24, hdr); + + irdma_debug_buf(cqp->dev, IRDMA_DEBUG_WQE, "RESUME_QP WQE", wqe, + IRDMA_CQP_WQE_SIZE * 8); + irdma_sc_cqp_post_sq(cqp); + + return 0; +} + +/** + * irdma_sc_cq_ack - acknowledge completion q + * @cq: cq struct + */ +static void irdma_sc_cq_ack(struct irdma_sc_cq *cq) +{ + writel(cq->cq_uk.cq_id, cq->cq_uk.cq_ack_db); +} + +/** + * irdma_sc_cq_init - initialize completion q + * @cq: cq struct + * @info: cq initialization info + */ +static enum irdma_status_code irdma_sc_cq_init(struct irdma_sc_cq *cq, + struct irdma_cq_init_info *info) +{ + enum irdma_status_code ret_code; + u32 pble_obj_cnt; + + if (info->cq_uk_init_info.cq_size < info->dev->hw_attrs.uk_attrs.min_hw_cq_size || + info->cq_uk_init_info.cq_size > info->dev->hw_attrs.uk_attrs.max_hw_cq_size) + return IRDMA_ERR_INVALID_SIZE; + + pble_obj_cnt = info->dev->hmc_info->hmc_obj[IRDMA_HMC_IW_PBLE].cnt; + if (info->virtual_map && info->first_pm_pbl_idx >= pble_obj_cnt) + return IRDMA_ERR_INVALID_PBLE_INDEX; + + cq->cq_pa = info->cq_base_pa; + cq->dev = info->dev; + cq->ceq_id = info->ceq_id; + info->cq_uk_init_info.cqe_alloc_db = cq->dev->cq_arm_db; + info->cq_uk_init_info.cq_ack_db = cq->dev->cq_ack_db; + ret_code = irdma_cq_uk_init(&cq->cq_uk, &info->cq_uk_init_info); + if (ret_code) + return ret_code; + + cq->virtual_map = info->virtual_map; + cq->pbl_chunk_size = info->pbl_chunk_size; + cq->ceqe_mask = 
info->ceqe_mask; + cq->cq_type = (info->type) ? info->type : IRDMA_CQ_TYPE_IWARP; + cq->shadow_area_pa = info->shadow_area_pa; + cq->shadow_read_threshold = info->shadow_read_threshold; + cq->ceq_id_valid = info->ceq_id_valid; + cq->tph_en = info->tph_en; + cq->tph_val = info->tph_val; + cq->first_pm_pbl_idx = info->first_pm_pbl_idx; + cq->vsi = info->vsi; + + return 0; +} + +/** + * irdma_sc_cq_create - create completion q + * @cq: cq struct + * @scratch: u64 saved to be used during cqp completion + * @check_overflow: flag for overflow check + * @post_sq: flag for cqp db to ring + */ +static enum irdma_status_code irdma_sc_cq_create(struct irdma_sc_cq *cq, + u64 scratch, + bool check_overflow, + bool post_sq) +{ + __le64 *wqe; + struct irdma_sc_cqp *cqp; + u64 hdr; + struct irdma_sc_ceq *ceq; + enum irdma_status_code ret_code = 0; + + cqp = cq->dev->cqp; + if (cq->cq_uk.cq_id > (cqp->dev->hmc_info->hmc_obj[IRDMA_HMC_IW_CQ].max_cnt - 1)) + return IRDMA_ERR_INVALID_CQ_ID; + + if (cq->ceq_id > (cq->dev->hmc_fpm_misc.max_ceqs - 1)) + return IRDMA_ERR_INVALID_CEQ_ID; + + wqe = irdma_sc_cqp_get_next_send_wqe(cqp, scratch); + if (!wqe) + return IRDMA_ERR_RING_FULL; + + ceq = cq->dev->ceq[cq->ceq_id]; + if (ceq && ceq->reg_cq) + ret_code = irdma_sc_add_cq_ctx(ceq, cq); + + if (ret_code) + return ret_code; + + set_64bit_val(wqe, 0, cq->cq_uk.cq_size); + set_64bit_val(wqe, 8, RS_64_1(cq, 1)); + set_64bit_val(wqe, 16, + LS_64(cq->shadow_read_threshold, + IRDMA_CQPSQ_CQ_SHADOW_READ_THRESHOLD)); + set_64bit_val(wqe, 32, (cq->virtual_map ? 0 : cq->cq_pa)); + set_64bit_val(wqe, 40, cq->shadow_area_pa); + set_64bit_val(wqe, 48, + LS_64((cq->virtual_map ? cq->first_pm_pbl_idx : 0), + IRDMA_CQPSQ_CQ_FIRSTPMPBLIDX)); + set_64bit_val(wqe, 56, + LS_64(cq->tph_val, IRDMA_CQPSQ_TPHVAL) | + LS_64(cq->vsi->vsi_idx, IRDMA_CQPSQ_VSIIDX)); + + hdr = FLD_LS_64(cq->dev, cq->cq_uk.cq_id, IRDMA_CQPSQ_CQ_CQID) | + FLD_LS_64(cq->dev, (cq->ceq_id_valid ? cq->ceq_id : 0), + IRDMA_CQPSQ_CQ_CEQID) | + LS_64(IRDMA_CQP_OP_CREATE_CQ, IRDMA_CQPSQ_OPCODE) | + LS_64(cq->pbl_chunk_size, IRDMA_CQPSQ_CQ_LPBLSIZE) | + LS_64(check_overflow, IRDMA_CQPSQ_CQ_CHKOVERFLOW) | + LS_64(cq->virtual_map, IRDMA_CQPSQ_CQ_VIRTMAP) | + LS_64(cq->ceqe_mask, IRDMA_CQPSQ_CQ_ENCEQEMASK) | + LS_64(cq->ceq_id_valid, IRDMA_CQPSQ_CQ_CEQIDVALID) | + LS_64(cq->tph_en, IRDMA_CQPSQ_TPHEN) | + LS_64(cq->cq_uk.avoid_mem_cflct, IRDMA_CQPSQ_CQ_AVOIDMEMCNFLCT) | + LS_64(cqp->polarity, IRDMA_CQPSQ_WQEVALID); + dma_wmb(); /* make sure WQE is written before valid bit is set */ + + set_64bit_val(wqe, 24, hdr); + + irdma_debug_buf(cqp->dev, IRDMA_DEBUG_WQE, "CQ_CREATE WQE", wqe, + IRDMA_CQP_WQE_SIZE * 8); + if (post_sq) + irdma_sc_cqp_post_sq(cqp); + + return 0; +} + +/** + * irdma_sc_cq_destroy - destroy completion q + * @cq: cq struct + * @scratch: u64 saved to be used during cqp completion + * @post_sq: flag for cqp db to ring + */ +static enum irdma_status_code irdma_sc_cq_destroy(struct irdma_sc_cq *cq, + u64 scratch, bool post_sq) +{ + struct irdma_sc_cqp *cqp; + __le64 *wqe; + u64 hdr; + struct irdma_sc_ceq *ceq; + + cqp = cq->dev->cqp; + wqe = irdma_sc_cqp_get_next_send_wqe(cqp, scratch); + if (!wqe) + return IRDMA_ERR_RING_FULL; + + ceq = cq->dev->ceq[cq->ceq_id]; + if (ceq && ceq->reg_cq) + irdma_sc_remove_cq_ctx(ceq, cq); + + set_64bit_val(wqe, 0, cq->cq_uk.cq_size); + set_64bit_val(wqe, 8, RS_64_1(cq, 1)); + set_64bit_val(wqe, 40, cq->shadow_area_pa); + set_64bit_val(wqe, 48, + (cq->virtual_map ? 
cq->first_pm_pbl_idx : 0)); + + hdr = cq->cq_uk.cq_id | + FLD_LS_64(cq->dev, (cq->ceq_id_valid ? cq->ceq_id : 0), + IRDMA_CQPSQ_CQ_CEQID) | + LS_64(IRDMA_CQP_OP_DESTROY_CQ, IRDMA_CQPSQ_OPCODE) | + LS_64(cq->pbl_chunk_size, IRDMA_CQPSQ_CQ_LPBLSIZE) | + LS_64(cq->virtual_map, IRDMA_CQPSQ_CQ_VIRTMAP) | + LS_64(cq->ceqe_mask, IRDMA_CQPSQ_CQ_ENCEQEMASK) | + LS_64(cq->ceq_id_valid, IRDMA_CQPSQ_CQ_CEQIDVALID) | + LS_64(cq->tph_en, IRDMA_CQPSQ_TPHEN) | + LS_64(cq->cq_uk.avoid_mem_cflct, IRDMA_CQPSQ_CQ_AVOIDMEMCNFLCT) | + LS_64(cqp->polarity, IRDMA_CQPSQ_WQEVALID); + dma_wmb(); /* make sure WQE is written before valid bit is set */ + + set_64bit_val(wqe, 24, hdr); + + irdma_debug_buf(cqp->dev, IRDMA_DEBUG_WQE, "CQ_DESTROY WQE", wqe, + IRDMA_CQP_WQE_SIZE * 8); + if (post_sq) + irdma_sc_cqp_post_sq(cqp); + + return 0; +} + +/** + * irdma_sc_cq_resize - set resized cq buffer info + * @cq: resized cq + * @info: resized cq buffer info + */ +static void irdma_sc_cq_resize(struct irdma_sc_cq *cq, struct irdma_modify_cq_info *info) +{ + cq->virtual_map = info->virtual_map; + cq->cq_pa = info->cq_pa; + cq->first_pm_pbl_idx = info->first_pm_pbl_idx; + cq->pbl_chunk_size = info->pbl_chunk_size; + cq->cq_uk.ops.iw_cq_resize(&cq->cq_uk, info->cq_base, info->cq_size); +} + +/** + * irdma_sc_cq_modify - modify a Completion Queue + * @cq: cq struct + * @info: modification info struct + * @scratch: u64 saved to be used during cqp completion + * @post_sq: flag to post to sq + */ +static enum irdma_status_code +irdma_sc_cq_modify(struct irdma_sc_cq *cq, struct irdma_modify_cq_info *info, + u64 scratch, bool post_sq) +{ + struct irdma_sc_cqp *cqp; + __le64 *wqe; + u64 hdr; + u32 pble_obj_cnt; + + if (info->ceq_valid && + info->ceq_id > (cq->dev->hmc_fpm_misc.max_ceqs - 1)) + return IRDMA_ERR_INVALID_CEQ_ID; + + pble_obj_cnt = cq->dev->hmc_info->hmc_obj[IRDMA_HMC_IW_PBLE].cnt; + if (info->cq_resize && info->virtual_map && + info->first_pm_pbl_idx >= pble_obj_cnt) + return IRDMA_ERR_INVALID_PBLE_INDEX; + + cqp = cq->dev->cqp; + wqe = irdma_sc_cqp_get_next_send_wqe(cqp, scratch); + if (!wqe) + return IRDMA_ERR_RING_FULL; + + set_64bit_val(wqe, 0, info->cq_size); + set_64bit_val(wqe, 8, RS_64_1(cq, 1)); + set_64bit_val(wqe, 16, + LS_64(info->shadow_read_threshold, + IRDMA_CQPSQ_CQ_SHADOW_READ_THRESHOLD)); + set_64bit_val(wqe, 32, info->cq_pa); + set_64bit_val(wqe, 40, cq->shadow_area_pa); + set_64bit_val(wqe, 48, info->first_pm_pbl_idx); + set_64bit_val(wqe, 56, + LS_64(cq->tph_val, IRDMA_CQPSQ_TPHVAL) | + LS_64(cq->vsi->vsi_idx, IRDMA_CQPSQ_VSIIDX)); + + hdr = cq->cq_uk.cq_id | + FLD_LS_64(cq->dev, (info->ceq_valid ? 
cq->ceq_id : 0), + IRDMA_CQPSQ_CQ_CEQID) | + LS_64(IRDMA_CQP_OP_MODIFY_CQ, IRDMA_CQPSQ_OPCODE) | + LS_64(info->cq_resize, IRDMA_CQPSQ_CQ_CQRESIZE) | + LS_64(info->pbl_chunk_size, IRDMA_CQPSQ_CQ_LPBLSIZE) | + LS_64(info->check_overflow, IRDMA_CQPSQ_CQ_CHKOVERFLOW) | + LS_64(info->virtual_map, IRDMA_CQPSQ_CQ_VIRTMAP) | + LS_64(cq->ceqe_mask, IRDMA_CQPSQ_CQ_ENCEQEMASK) | + LS_64(info->ceq_valid, IRDMA_CQPSQ_CQ_CEQIDVALID) | + LS_64(cq->tph_en, IRDMA_CQPSQ_TPHEN) | + LS_64(cq->cq_uk.avoid_mem_cflct, IRDMA_CQPSQ_CQ_AVOIDMEMCNFLCT) | + LS_64(cqp->polarity, IRDMA_CQPSQ_WQEVALID); + dma_wmb(); /* make sure WQE is written before valid bit is set */ + + set_64bit_val(wqe, 24, hdr); + + irdma_debug_buf(cqp->dev, IRDMA_DEBUG_WQE, "CQ_MODIFY WQE", wqe, + IRDMA_CQP_WQE_SIZE * 8); + if (post_sq) + irdma_sc_cqp_post_sq(cqp); + + return 0; +} + +/** + * irdma_check_cqp_progress - check cqp processing progress + * @timeout: timeout info struct + * @dev: sc device struct + */ +static void irdma_check_cqp_progress(struct irdma_cqp_timeout *timeout, + struct irdma_sc_dev *dev) +{ + if (timeout->compl_cqp_cmds != dev->cqp_cmd_stats[IRDMA_OP_CMPL_CMDS]) { + timeout->compl_cqp_cmds = dev->cqp_cmd_stats[IRDMA_OP_CMPL_CMDS]; + timeout->count = 0; + } else { + if (dev->cqp_cmd_stats[IRDMA_OP_REQ_CMDS] != + timeout->compl_cqp_cmds) + timeout->count++; + } +} + +/** + * irdma_get_cqp_reg_info - get head and tail for cqp using registers + * @cqp: struct for cqp hw + * @val: cqp tail register value + * @tail: wqtail register value + * @error: cqp processing err + */ +static inline void irdma_get_cqp_reg_info(struct irdma_sc_cqp *cqp, u32 *val, + u32 *tail, u32 *error) +{ + *val = readl(cqp->dev->hw_regs[IRDMA_CQPTAIL]); + *tail = RS_32(*val, IRDMA_CQPTAIL_WQTAIL); + *error = RS_32(*val, IRDMA_CQPTAIL_CQP_OP_ERR); +} + +/** + * irdma_cqp_poll_registers - poll cqp registers + * @cqp: struct for cqp hw + * @tail: wqtail register value + * @count: how many times to try for completion + */ +static enum irdma_status_code irdma_cqp_poll_registers(struct irdma_sc_cqp *cqp, + u32 tail, u32 count) +{ + u32 i = 0; + u32 newtail, error, val; + + while (i++ < count) { + irdma_get_cqp_reg_info(cqp, &val, &newtail, &error); + if (error) { + error = readl(cqp->dev->hw_regs[IRDMA_CQPERRCODES]); + dev_dbg(rfdev_to_dev(cqp->dev), + "CQP: CQPERRCODES error_code[x%08X]\n", error); + return IRDMA_ERR_CQP_COMPL_ERROR; + } + if (newtail != tail) { + /* SUCCESS */ + IRDMA_RING_MOVE_TAIL(cqp->sq_ring); + cqp->dev->cqp_cmd_stats[IRDMA_OP_CMPL_CMDS]++; + return 0; + } + udelay(cqp->dev->hw_attrs.max_sleep_count); + } + + return IRDMA_ERR_TIMEOUT; +} + +/** + * irdma_sc_decode_fpm_commit - decode a 64 bit value into count and base + * @buf: pointer to commit buffer + * @buf_idx: buffer index + * @obj_info: object info pointer + * @rsrc_idx: indexs of memory resource + */ +static u64 irdma_sc_decode_fpm_commit(__le64 *buf, u32 buf_idx, + struct irdma_hmc_obj_info *obj_info, + u32 rsrc_idx) +{ + u64 temp; + + get_64bit_val(buf, buf_idx, &temp); + + switch (rsrc_idx) { + case IRDMA_HMC_IW_QP: + obj_info[rsrc_idx].cnt = (u32)RS_64(temp, IRDMA_COMMIT_FPM_QPCNT); + break; + case IRDMA_HMC_IW_CQ: + obj_info[rsrc_idx].cnt = (u32)RS_64(temp, IRDMA_COMMIT_FPM_CQCNT); + break; + case IRDMA_HMC_IW_APBVT_ENTRY: + obj_info[rsrc_idx].cnt = 1; + break; + default: + obj_info[rsrc_idx].cnt = (u32)temp; + break; + } + + obj_info[rsrc_idx].base = (u64)RS_64_1(temp, IRDMA_COMMIT_FPM_BASE_S) * 512; + + return temp; +} + +/** + * irdma_sc_parse_fpm_commit_buf - parse fpm 
commit buffer + * @dev: pointer to dev struct + * @buf: ptr to fpm commit buffer + * @info: ptr to irdma_hmc_obj_info struct + * @sd: number of SDs for HMC objects + * + * parses fpm commit info and copy base value + * of hmc objects in hmc_info + */ +static enum irdma_status_code +irdma_sc_parse_fpm_commit_buf(struct irdma_sc_dev *dev, __le64 *buf, + struct irdma_hmc_obj_info *info, u32 *sd) +{ + u64 size; + u32 i; + u64 max_base = 0; + u32 last_hmc_obj = 0; + + irdma_sc_decode_fpm_commit(buf, 0, info, IRDMA_HMC_IW_QP); + irdma_sc_decode_fpm_commit(buf, 8, info, IRDMA_HMC_IW_CQ); + /* skiping RSRVD */ + irdma_sc_decode_fpm_commit(buf, 24, info, IRDMA_HMC_IW_HTE); + irdma_sc_decode_fpm_commit(buf, 32, info, IRDMA_HMC_IW_ARP); + irdma_sc_decode_fpm_commit(buf, 40, info, + IRDMA_HMC_IW_APBVT_ENTRY); + irdma_sc_decode_fpm_commit(buf, 48, info, IRDMA_HMC_IW_MR); + irdma_sc_decode_fpm_commit(buf, 56, info, IRDMA_HMC_IW_XF); + irdma_sc_decode_fpm_commit(buf, 64, info, IRDMA_HMC_IW_XFFL); + irdma_sc_decode_fpm_commit(buf, 72, info, IRDMA_HMC_IW_Q1); + irdma_sc_decode_fpm_commit(buf, 80, info, IRDMA_HMC_IW_Q1FL); + irdma_sc_decode_fpm_commit(buf, 88, info, + IRDMA_HMC_IW_TIMER); + irdma_sc_decode_fpm_commit(buf, 112, info, + IRDMA_HMC_IW_PBLE); + /* skipping RSVD. */ + if (dev->hw_attrs.uk_attrs.hw_rev != IRDMA_GEN_1) { + irdma_sc_decode_fpm_commit(buf, 96, info, + IRDMA_HMC_IW_FSIMC); + irdma_sc_decode_fpm_commit(buf, 104, info, + IRDMA_HMC_IW_FSIAV); + irdma_sc_decode_fpm_commit(buf, 128, info, + IRDMA_HMC_IW_RRF); + irdma_sc_decode_fpm_commit(buf, 136, info, + IRDMA_HMC_IW_RRFFL); + irdma_sc_decode_fpm_commit(buf, 144, info, + IRDMA_HMC_IW_HDR); + irdma_sc_decode_fpm_commit(buf, 152, info, + IRDMA_HMC_IW_MD); + irdma_sc_decode_fpm_commit(buf, 160, info, + IRDMA_HMC_IW_OOISC); + irdma_sc_decode_fpm_commit(buf, 168, info, + IRDMA_HMC_IW_OOISCFFL); + } + + /* searching for the last object in HMC to find the size of the HMC area. 
*/ + for (i = IRDMA_HMC_IW_QP; i < IRDMA_HMC_IW_MAX; i++) { + if (info[i].base > max_base) { + max_base = info[i].base; + last_hmc_obj = i; + } + } + + size = info[last_hmc_obj].cnt * info[last_hmc_obj].size + + info[last_hmc_obj].base; + + if (size & 0x1FFFFF) + *sd = (u32)((size >> 21) + 1); /* add 1 for remainder */ + else + *sd = (u32)(size >> 21); + + return 0; +} + +/** + * irdma_sc_decode_fpm_query() - Decode a 64 bit value into max count and size + * @buf: ptr to fpm query buffer + * @buf_idx: index into buf + * @obj_info: ptr to irdma_hmc_obj_info struct + * @rsrc_idx: resource index into info + * + * Decode a 64 bit value from fpm query buffer into max count and size + */ +static u64 irdma_sc_decode_fpm_query(__le64 *buf, u32 buf_idx, + struct irdma_hmc_obj_info *obj_info, + u32 rsrc_idx) +{ + u64 temp; + u32 size; + + get_64bit_val(buf, buf_idx, &temp); + obj_info[rsrc_idx].max_cnt = (u32)temp; + size = (u32)RS_64_1(temp, 32); + obj_info[rsrc_idx].size = LS_64_1(1, size); + + return temp; +} + +/** + * irdma_sc_parse_fpm_query_buf() - parses fpm query buffer + * @dev: ptr to shared code device + * @buf: ptr to fpm query buffer + * @hmc_info: ptr to irdma_hmc_obj_info struct + * @hmc_fpm_misc: ptr to fpm data + * + * parses fpm query buffer and copy max_cnt and + * size value of hmc objects in hmc_info + */ +static enum irdma_status_code +irdma_sc_parse_fpm_query_buf(struct irdma_sc_dev *dev, __le64 *buf, + struct irdma_hmc_info *hmc_info, + struct irdma_hmc_fpm_misc *hmc_fpm_misc) +{ + struct irdma_hmc_obj_info *obj_info; + u64 temp; + u32 size; + u16 max_pe_sds; + + obj_info = hmc_info->hmc_obj; + + get_64bit_val(buf, 0, &temp); + hmc_info->first_sd_index = (u16)RS_64(temp, IRDMA_QUERY_FPM_FIRST_PE_SD_INDEX); + max_pe_sds = (u16)RS_64(temp, IRDMA_QUERY_FPM_MAX_PE_SDS); + + /* Reduce SD count for VFs by 1 to account + * for PBLE backing page rounding + */ + if (hmc_info->hmc_fn_id >= dev->hw_attrs.first_hw_vf_fpm_id) + max_pe_sds--; + hmc_fpm_misc->max_sds = max_pe_sds; + hmc_info->sd_table.sd_cnt = max_pe_sds + hmc_info->first_sd_index; + get_64bit_val(buf, 8, &temp); + obj_info[IRDMA_HMC_IW_QP].max_cnt = (u32)RS_64(temp, IRDMA_QUERY_FPM_MAX_QPS); + size = (u32)RS_64_1(temp, 32); + obj_info[IRDMA_HMC_IW_QP].size = LS_64_1(1, size); + + get_64bit_val(buf, 16, &temp); + obj_info[IRDMA_HMC_IW_CQ].max_cnt = (u32)RS_64(temp, IRDMA_QUERY_FPM_MAX_CQS); + size = (u32)RS_64_1(temp, 32); + obj_info[IRDMA_HMC_IW_CQ].size = LS_64_1(1, size); + + irdma_sc_decode_fpm_query(buf, 32, obj_info, IRDMA_HMC_IW_HTE); + irdma_sc_decode_fpm_query(buf, 40, obj_info, IRDMA_HMC_IW_ARP); + + obj_info[IRDMA_HMC_IW_APBVT_ENTRY].size = 8192; + obj_info[IRDMA_HMC_IW_APBVT_ENTRY].max_cnt = 1; + + irdma_sc_decode_fpm_query(buf, 48, obj_info, IRDMA_HMC_IW_MR); + irdma_sc_decode_fpm_query(buf, 56, obj_info, IRDMA_HMC_IW_XF); + + get_64bit_val(buf, 64, &temp); + obj_info[IRDMA_HMC_IW_XFFL].max_cnt = (u32)temp; + obj_info[IRDMA_HMC_IW_XFFL].size = 4; + hmc_fpm_misc->xf_block_size = RS_64(temp, IRDMA_QUERY_FPM_XFBLOCKSIZE); + if (!hmc_fpm_misc->xf_block_size) + return IRDMA_ERR_INVALID_SIZE; + + irdma_sc_decode_fpm_query(buf, 72, obj_info, IRDMA_HMC_IW_Q1); + get_64bit_val(buf, 80, &temp); + obj_info[IRDMA_HMC_IW_Q1FL].max_cnt = (u32)temp; + obj_info[IRDMA_HMC_IW_Q1FL].size = 4; + + hmc_fpm_misc->q1_block_size = RS_64(temp, IRDMA_QUERY_FPM_Q1BLOCKSIZE); + if (!hmc_fpm_misc->q1_block_size) + return IRDMA_ERR_INVALID_SIZE; + + irdma_sc_decode_fpm_query(buf, 88, obj_info, IRDMA_HMC_IW_TIMER); + + get_64bit_val(buf, 
112, &temp); + obj_info[IRDMA_HMC_IW_PBLE].max_cnt = (u32)temp; + obj_info[IRDMA_HMC_IW_PBLE].size = 8; + + get_64bit_val(buf, 120, &temp); + hmc_fpm_misc->max_ceqs = RS_64(temp, IRDMA_QUERY_FPM_MAX_CEQS); + hmc_fpm_misc->ht_multiplier = RS_64(temp, IRDMA_QUERY_FPM_HTMULTIPLIER); + hmc_fpm_misc->timer_bucket = RS_64(temp, IRDMA_QUERY_FPM_TIMERBUCKET); + if (dev->hw_attrs.uk_attrs.hw_rev == IRDMA_GEN_1) + return 0; + irdma_sc_decode_fpm_query(buf, 96, obj_info, IRDMA_HMC_IW_FSIMC); + irdma_sc_decode_fpm_query(buf, 104, obj_info, IRDMA_HMC_IW_FSIAV); + irdma_sc_decode_fpm_query(buf, 128, obj_info, IRDMA_HMC_IW_RRF); + + get_64bit_val(buf, 136, &temp); + obj_info[IRDMA_HMC_IW_RRFFL].max_cnt = (u32)temp; + obj_info[IRDMA_HMC_IW_RRFFL].size = 4; + hmc_fpm_misc->rrf_block_size = RS_64(temp, IRDMA_QUERY_FPM_RRFBLOCKSIZE); + if (!hmc_fpm_misc->rrf_block_size && + obj_info[IRDMA_HMC_IW_RRFFL].max_cnt) + return IRDMA_ERR_INVALID_SIZE; + + irdma_sc_decode_fpm_query(buf, 144, obj_info, IRDMA_HMC_IW_HDR); + irdma_sc_decode_fpm_query(buf, 152, obj_info, IRDMA_HMC_IW_MD); + irdma_sc_decode_fpm_query(buf, 160, obj_info, IRDMA_HMC_IW_OOISC); + + get_64bit_val(buf, 168, &temp); + obj_info[IRDMA_HMC_IW_OOISCFFL].max_cnt = (u32)temp; + obj_info[IRDMA_HMC_IW_OOISCFFL].size = 4; + hmc_fpm_misc->ooiscf_block_size = RS_64(temp, IRDMA_QUERY_FPM_OOISCFBLOCKSIZE); + if (!hmc_fpm_misc->ooiscf_block_size && + obj_info[IRDMA_HMC_IW_OOISCFFL].max_cnt) + return IRDMA_ERR_INVALID_SIZE; + + return 0; +} + +/** + * irdma_sc_find_reg_cq - find cq ctx index + * @ceq: ceq sc structure + * @cq: cq sc structure + */ +static u32 irdma_sc_find_reg_cq(struct irdma_sc_ceq *ceq, + struct irdma_sc_cq *cq) +{ + u32 i; + + for (i = 0; i < ceq->reg_cq_size; i++) { + if (cq == ceq->reg_cq[i]) + return i; + } + + return IRDMA_INVALID_CQ_IDX; +} + +/** + * irdma_sc_add_cq_ctx - add cq ctx tracking for ceq + * @ceq: ceq sc structure + * @cq: cq sc structure + */ +enum irdma_status_code irdma_sc_add_cq_ctx(struct irdma_sc_ceq *ceq, + struct irdma_sc_cq *cq) +{ + unsigned long flags; + + spin_lock_irqsave(&ceq->req_cq_lock, flags); + + if (ceq->reg_cq_size == ceq->elem_cnt) { + spin_unlock_irqrestore(&ceq->req_cq_lock, flags); + return IRDMA_ERR_REG_CQ_FULL; + } + + ceq->reg_cq[ceq->reg_cq_size++] = cq; + + spin_unlock_irqrestore(&ceq->req_cq_lock, flags); + + return 0; +} + +/** + * irdma_sc_remove_cq_ctx - remove cq ctx tracking for ceq + * @ceq: ceq sc structure + * @cq: cq sc structure + */ +void irdma_sc_remove_cq_ctx(struct irdma_sc_ceq *ceq, struct irdma_sc_cq *cq) +{ + unsigned long flags; + u32 cq_ctx_idx; + + spin_lock_irqsave(&ceq->req_cq_lock, flags); + cq_ctx_idx = irdma_sc_find_reg_cq(ceq, cq); + if (cq_ctx_idx == IRDMA_INVALID_CQ_IDX) + goto exit; + + ceq->reg_cq_size--; + if (cq_ctx_idx != ceq->reg_cq_size) + ceq->reg_cq[cq_ctx_idx] = ceq->reg_cq[ceq->reg_cq_size]; + ceq->reg_cq[ceq->reg_cq_size] = NULL; + +exit: + spin_unlock_irqrestore(&ceq->req_cq_lock, flags); +} + +/** + * irdma_sc_cqp_init - Initialize buffers for a control Queue Pair + * @cqp: IWARP control queue pair pointer + * @info: IWARP control queue pair init info pointer + * + * Initializes the object and context buffers for a control Queue Pair. 
+ */ +static enum irdma_status_code +irdma_sc_cqp_init(struct irdma_sc_cqp *cqp, struct irdma_cqp_init_info *info) +{ + u8 hw_sq_size; + + if (info->sq_size > IRDMA_CQP_SW_SQSIZE_2048 || + info->sq_size < IRDMA_CQP_SW_SQSIZE_4 || + ((info->sq_size & (info->sq_size - 1)))) + return IRDMA_ERR_INVALID_SIZE; + + hw_sq_size = irdma_get_encoded_wqe_size(info->sq_size, true); + cqp->size = sizeof(*cqp); + cqp->sq_size = info->sq_size; + cqp->hw_sq_size = hw_sq_size; + cqp->sq_base = info->sq; + cqp->host_ctx = info->host_ctx; + cqp->sq_pa = info->sq_pa; + cqp->host_ctx_pa = info->host_ctx_pa; + cqp->dev = info->dev; + cqp->struct_ver = info->struct_ver; + cqp->hw_maj_ver = info->hw_maj_ver; + cqp->hw_min_ver = info->hw_min_ver; + cqp->scratch_array = info->scratch_array; + cqp->polarity = 0; + cqp->en_datacenter_tcp = info->en_datacenter_tcp; + cqp->ena_vf_count = info->ena_vf_count; + cqp->hmc_profile = info->hmc_profile; + cqp->ceqs_per_vf = info->ceqs_per_vf; + cqp->disable_packed = info->disable_packed; + cqp->rocev2_rto_policy = info->rocev2_rto_policy; + cqp->protocol_used = info->protocol_used; + info->dev->cqp = cqp; + + IRDMA_RING_INIT(cqp->sq_ring, cqp->sq_size); + cqp->dev->cqp_cmd_stats[IRDMA_OP_REQ_CMDS] = 0; + cqp->dev->cqp_cmd_stats[IRDMA_OP_CMPL_CMDS] = 0; + /* for the cqp commands backlog. */ + INIT_LIST_HEAD(&cqp->dev->cqp_cmd_head); + + writel(0, cqp->dev->hw_regs[IRDMA_CQPTAIL]); + if (cqp->dev->hw_attrs.uk_attrs.hw_rev <= IRDMA_GEN_2) + writel(0, cqp->dev->hw_regs[IRDMA_CQPDB]); + writel(0, cqp->dev->hw_regs[IRDMA_CCQPSTATUS]); + + dev_dbg(rfdev_to_dev(cqp->dev), + "WQE: sq_size[%04d] hw_sq_size[%04d] sq_base[%p] sq_pa[%pK] cqp[%p] polarity[x%04x]\n", + cqp->sq_size, cqp->hw_sq_size, cqp->sq_base, + (u64 *)(uintptr_t)cqp->sq_pa, cqp, cqp->polarity); + + return 0; +} + +/** + * irdma_sc_cqp_create - create cqp during bringup + * @cqp: struct for cqp hw + * @maj_err: If error, major err number + * @min_err: If error, minor err number + */ +static enum irdma_status_code irdma_sc_cqp_create(struct irdma_sc_cqp *cqp, + u16 *maj_err, u16 *min_err) +{ + u64 temp; + u32 cnt = 0, p1, p2, val = 0, err_code; + enum irdma_status_code ret_code; + + cqp->sdbuf.size = ALIGN(IRDMA_UPDATE_SD_BUFF_SIZE * cqp->sq_size, + IRDMA_SD_BUF_ALIGNMENT); + cqp->sdbuf.va = dma_alloc_coherent(hw_to_dev(cqp->dev->hw), + cqp->sdbuf.size, &cqp->sdbuf.pa, + GFP_KERNEL); + if (!cqp->sdbuf.va) + return IRDMA_ERR_NO_MEMORY; + + temp = LS_64(cqp->hw_sq_size, IRDMA_CQPHC_SQSIZE) | + LS_64(cqp->struct_ver, IRDMA_CQPHC_SVER) | + LS_64(cqp->rocev2_rto_policy, IRDMA_CQPHC_ROCEV2_RTO_POLICY) | + LS_64(cqp->protocol_used, IRDMA_CQPHC_PROTOCOL_USED) | + LS_64(cqp->disable_packed, IRDMA_CQPHC_DISABLE_PFPDUS) | + LS_64(cqp->ceqs_per_vf, IRDMA_CQPHC_CEQPERVF); + set_64bit_val(cqp->host_ctx, 0, temp); + set_64bit_val(cqp->host_ctx, 8, cqp->sq_pa); + + temp = LS_64(cqp->ena_vf_count, IRDMA_CQPHC_ENABLED_VFS) | + LS_64(cqp->hmc_profile, IRDMA_CQPHC_HMC_PROFILE); + set_64bit_val(cqp->host_ctx, 16, temp); + set_64bit_val(cqp->host_ctx, 24, (uintptr_t)cqp); + set_64bit_val(cqp->host_ctx, 32, 0); + temp = LS_64(cqp->hw_maj_ver, IRDMA_CQPHC_HW_MAJVER) | + LS_64(cqp->hw_min_ver, IRDMA_CQPHC_HW_MINVER); + set_64bit_val(cqp->host_ctx, 32, temp); + set_64bit_val(cqp->host_ctx, 40, 0); + set_64bit_val(cqp->host_ctx, 48, 0); + set_64bit_val(cqp->host_ctx, 56, 0); + irdma_debug_buf(cqp->dev, IRDMA_DEBUG_WQE, "CQP_HOST_CTX WQE", + cqp->host_ctx, IRDMA_CQP_CTX_SIZE * 8); + p1 = RS_32_1(cqp->host_ctx_pa, 32); + p2 = (u32)cqp->host_ctx_pa; + 
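+ /* The 64-bit host context address is programmed as two 32-bit register + * writes: the upper half (p1) to CCQPHIGH, the lower half (p2) to + * CCQPLOW. CCQPSTATUS is then polled (bounded by max_done_count) until + * it goes non-zero, after which the CCQP_ERR bit distinguishes success + * from failure. + */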
+ writel(p1, cqp->dev->hw_regs[IRDMA_CCQPHIGH]); + writel(p2, cqp->dev->hw_regs[IRDMA_CCQPLOW]); + + do { + if (cnt++ > cqp->dev->hw_attrs.max_done_count) { + ret_code = IRDMA_ERR_TIMEOUT; + goto err; + } + udelay(cqp->dev->hw_attrs.max_sleep_count); + val = readl(cqp->dev->hw_regs[IRDMA_CCQPSTATUS]); + } while (!val); + + if (FLD_RS_32(cqp->dev, val, IRDMA_CCQPSTATUS_CCQP_ERR)) { + ret_code = IRDMA_ERR_DEVICE_NOT_SUPPORTED; + goto err; + } + + cqp->process_cqp_sds = irdma_update_sds_noccq; + return 0; + +err: + dma_free_coherent(hw_to_dev(cqp->dev->hw), cqp->sdbuf.size, + cqp->sdbuf.va, cqp->sdbuf.pa); + cqp->sdbuf.va = NULL; + err_code = readl(cqp->dev->hw_regs[IRDMA_CQPERRCODES]); + *min_err = RS_32(err_code, IRDMA_CQPERRCODES_CQP_MINOR_CODE); + *maj_err = RS_32(err_code, IRDMA_CQPERRCODES_CQP_MAJOR_CODE); + return ret_code; +} + +/** + * irdma_sc_cqp_post_sq - post of cqp's sq + * @cqp: struct for cqp hw + */ +void irdma_sc_cqp_post_sq(struct irdma_sc_cqp *cqp) +{ + writel(IRDMA_RING_CURRENT_HEAD(cqp->sq_ring), cqp->dev->cqp_db); + + dev_dbg(rfdev_to_dev(cqp->dev), + "WQE: CQP SQ head 0x%x tail 0x%x size 0x%x\n", + cqp->sq_ring.head, cqp->sq_ring.tail, cqp->sq_ring.size); +} + +/** + * irdma_sc_cqp_get_next_send_wqe_idx - get next wqe on cqp sq + * and pass back index + * @cqp: CQP HW structure + * @scratch: private data for CQP WQE + * @wqe_idx: WQE index of CQP SQ + */ +static __le64 *irdma_sc_cqp_get_next_send_wqe_idx(struct irdma_sc_cqp *cqp, + u64 scratch, u32 *wqe_idx) +{ + __le64 *wqe = NULL; + enum irdma_status_code ret_code; + + if (IRDMA_RING_FULL_ERR(cqp->sq_ring)) { + dev_dbg(rfdev_to_dev(cqp->dev), + "WQE: CQP SQ is full, head 0x%x tail 0x%x size 0x%x\n", + cqp->sq_ring.head, cqp->sq_ring.tail, + cqp->sq_ring.size); + return NULL; + } + IRDMA_ATOMIC_RING_MOVE_HEAD(cqp->sq_ring, *wqe_idx, ret_code); + if (ret_code) + return NULL; + + cqp->dev->cqp_cmd_stats[IRDMA_OP_REQ_CMDS]++; + if (!*wqe_idx) + cqp->polarity = !cqp->polarity; + wqe = cqp->sq_base[*wqe_idx].elem; + cqp->scratch_array[*wqe_idx] = scratch; + IRDMA_CQP_INIT_WQE(wqe); + + return wqe; +} + +/** + * irdma_sc_cqp_get_next_send_wqe - get next wqe on cqp sq + * @cqp: struct for cqp hw + * @scratch: private data for CQP WQE + */ +__le64 *irdma_sc_cqp_get_next_send_wqe(struct irdma_sc_cqp *cqp, u64 scratch) +{ + u32 wqe_idx; + + return irdma_sc_cqp_get_next_send_wqe_idx(cqp, scratch, &wqe_idx); +} + +/** + * irdma_sc_cqp_destroy - destroy cqp during close + * @cqp: struct for cqp hw + */ +static enum irdma_status_code irdma_sc_cqp_destroy(struct irdma_sc_cqp *cqp) +{ + u32 cnt = 0, val = 1; + enum irdma_status_code ret_code = 0; + + writel(0, cqp->dev->hw_regs[IRDMA_CCQPHIGH]); + writel(0, cqp->dev->hw_regs[IRDMA_CCQPLOW]); + do { + if (cnt++ > cqp->dev->hw_attrs.max_done_count) { + ret_code = IRDMA_ERR_TIMEOUT; + break; + } + udelay(cqp->dev->hw_attrs.max_sleep_count); + val = readl(cqp->dev->hw_regs[IRDMA_CCQPSTATUS]); + } while (FLD_RS_32(cqp->dev, val, IRDMA_CCQPSTATUS_CCQP_DONE)); + + dma_free_coherent(hw_to_dev(cqp->dev->hw), cqp->sdbuf.size, + cqp->sdbuf.va, cqp->sdbuf.pa); + cqp->sdbuf.va = NULL; + + return ret_code; +} + +/** + * irdma_sc_ccq_arm - enable intr for control cq + * @ccq: ccq sc struct + */ +static void irdma_sc_ccq_arm(struct irdma_sc_cq *ccq) +{ + u64 temp_val; + u16 sw_cq_sel; + u8 arm_next_se; + u8 arm_seq_num; + + get_64bit_val(ccq->cq_uk.shadow_area, 32, &temp_val); + sw_cq_sel = (u16)RS_64(temp_val, IRDMA_CQ_DBSA_SW_CQ_SELECT); + arm_next_se = (u8)RS_64(temp_val, 
IRDMA_CQ_DBSA_ARM_NEXT_SE); + arm_seq_num = (u8)RS_64(temp_val, IRDMA_CQ_DBSA_ARM_SEQ_NUM); + arm_seq_num++; + temp_val = LS_64(arm_seq_num, IRDMA_CQ_DBSA_ARM_SEQ_NUM) | + LS_64(sw_cq_sel, IRDMA_CQ_DBSA_SW_CQ_SELECT) | + LS_64(arm_next_se, IRDMA_CQ_DBSA_ARM_NEXT_SE) | + LS_64(1, IRDMA_CQ_DBSA_ARM_NEXT); + set_64bit_val(ccq->cq_uk.shadow_area, 32, temp_val); + + dma_wmb(); /* make sure shadow area is updated before arming */ + + writel(ccq->cq_uk.cq_id, ccq->dev->cq_arm_db); +} + +/** + * irdma_sc_ccq_get_cqe_info - get ccq's cq entry + * @ccq: ccq sc struct + * @info: completion q entry to return + */ +static enum irdma_status_code +irdma_sc_ccq_get_cqe_info(struct irdma_sc_cq *ccq, + struct irdma_ccq_cqe_info *info) +{ + u64 qp_ctx, temp, temp1; + __le64 *cqe; + struct irdma_sc_cqp *cqp; + u32 wqe_idx; + u32 error; + u8 polarity; + enum irdma_status_code ret_code = 0; + + if (ccq->cq_uk.avoid_mem_cflct) + cqe = IRDMA_GET_CURRENT_EXTENDED_CQ_ELEM(&ccq->cq_uk); + else + cqe = IRDMA_GET_CURRENT_CQ_ELEM(&ccq->cq_uk); + + get_64bit_val(cqe, 24, &temp); + polarity = (u8)RS_64(temp, IRDMA_CQ_VALID); + if (polarity != ccq->cq_uk.polarity) + return IRDMA_ERR_Q_EMPTY; + + get_64bit_val(cqe, 8, &qp_ctx); + cqp = (struct irdma_sc_cqp *)(unsigned long)qp_ctx; + info->error = (bool)RS_64(temp, IRDMA_CQ_ERROR); + info->min_err_code = (u16)RS_64(temp, IRDMA_CQ_MINERR); + if (info->error) { + info->maj_err_code = (u16)RS_64(temp, IRDMA_CQ_MAJERR); + info->min_err_code = (u16)RS_64(temp, IRDMA_CQ_MINERR); + error = readl(cqp->dev->hw_regs[IRDMA_CQPERRCODES]); + dev_dbg(rfdev_to_dev(cqp->dev), + "CQP: CQPERRCODES error_code[x%08X]\n", error); + } + wqe_idx = (u32)RS_64(temp, IRDMA_CQ_WQEIDX); + info->scratch = cqp->scratch_array[wqe_idx]; + + get_64bit_val(cqe, 16, &temp1); + info->op_ret_val = (u32)RS_64(temp1, IRDMA_CCQ_OPRETVAL); + get_64bit_val(cqp->sq_base[wqe_idx].elem, 24, &temp1); + info->op_code = (u8)RS_64(temp1, IRDMA_CQPSQ_OPCODE); + info->cqp = cqp; + + /* move the head for cq */ + IRDMA_RING_MOVE_HEAD(ccq->cq_uk.cq_ring, ret_code); + if (!IRDMA_RING_CURRENT_HEAD(ccq->cq_uk.cq_ring)) + ccq->cq_uk.polarity ^= 1; + + /* update cq tail in cq shadow memory also */ + IRDMA_RING_MOVE_TAIL(ccq->cq_uk.cq_ring); + set_64bit_val(ccq->cq_uk.shadow_area, 0, + IRDMA_RING_CURRENT_HEAD(ccq->cq_uk.cq_ring)); + + dma_wmb(); /* make sure shadow area is updated before moving tail */ + + IRDMA_RING_MOVE_TAIL(cqp->sq_ring); + ccq->dev->cqp_cmd_stats[IRDMA_OP_CMPL_CMDS]++; + + return ret_code; +} + +/** + * irdma_sc_poll_for_cqp_op_done - Waits for last write to complete in CQP SQ + * @cqp: struct for cqp hw + * @op_code: cqp opcode for completion + * @compl_info: completion q entry to return + */ +static enum irdma_status_code +irdma_sc_poll_for_cqp_op_done(struct irdma_sc_cqp *cqp, u8 op_code, + struct irdma_ccq_cqe_info *compl_info) +{ + struct irdma_ccq_cqe_info info = {}; + struct irdma_sc_cq *ccq; + enum irdma_status_code ret_code = 0; + u32 cnt = 0; + + ccq = cqp->dev->ccq; + while (1) { + if (cnt++ > 100 * cqp->dev->hw_attrs.max_done_count) + return IRDMA_ERR_TIMEOUT; + + if (irdma_sc_ccq_get_cqe_info(ccq, &info)) { + udelay(cqp->dev->hw_attrs.max_sleep_count); + continue; + } + if (info.error) { + ret_code = IRDMA_ERR_CQP_COMPL_ERROR; + break; + } + /* make sure op code matches*/ + if (op_code == info.op_code) + break; + dev_dbg(rfdev_to_dev(cqp->dev), + "WQE: opcode mismatch for my op code 0x%x, returned opcode %x\n", + op_code, info.op_code); + } + + if (compl_info) + memcpy(compl_info, &info, 
sizeof(*compl_info)); + + return ret_code; +} + +/** + * irdma_sc_manage_hmc_pm_func_table - manage of function table + * @cqp: struct for cqp hw + * @scratch: u64 saved to be used during cqp completion + * @info: info for the manage function table operation + * @post_sq: flag for cqp db to ring + */ +static enum irdma_status_code +irdma_sc_manage_hmc_pm_func_table(struct irdma_sc_cqp *cqp, + struct irdma_hmc_fcn_info *info, + u64 scratch, bool post_sq) +{ + __le64 *wqe; + u64 hdr; + + wqe = irdma_sc_cqp_get_next_send_wqe(cqp, scratch); + if (!wqe) + return IRDMA_ERR_RING_FULL; + + set_64bit_val(wqe, 0, 0); + set_64bit_val(wqe, 8, 0); + set_64bit_val(wqe, 16, 0); + set_64bit_val(wqe, 32, 0); + set_64bit_val(wqe, 40, 0); + set_64bit_val(wqe, 48, 0); + set_64bit_val(wqe, 56, 0); + + hdr = LS_64(info->vf_id, IRDMA_CQPSQ_MHMC_VFIDX) | + LS_64(IRDMA_CQP_OP_MANAGE_HMC_PM_FUNC_TABLE, + IRDMA_CQPSQ_OPCODE) | + LS_64(info->free_fcn, IRDMA_CQPSQ_MHMC_FREEPMFN) | + LS_64(cqp->polarity, IRDMA_CQPSQ_WQEVALID); + dma_wmb(); /* make sure WQE is written before valid bit is set */ + + set_64bit_val(wqe, 24, hdr); + + irdma_debug_buf(cqp->dev, IRDMA_DEBUG_WQE, + "MANAGE_HMC_PM_FUNC_TABLE WQE", wqe, + IRDMA_CQP_WQE_SIZE * 8); + if (post_sq) + irdma_sc_cqp_post_sq(cqp); + + return 0; +} + +/** + * irdma_sc_manage_hmc_pm_func_table_done - wait for cqp wqe completion for function table + * @cqp: struct for cqp hw + */ +static enum irdma_status_code +irdma_sc_manage_hmc_pm_func_table_done(struct irdma_sc_cqp *cqp) +{ + return irdma_sc_poll_for_cqp_op_done(cqp, + IRDMA_CQP_OP_MANAGE_HMC_PM_FUNC_TABLE, + NULL); +} + +/** + * irdma_sc_commit_fpm_values_done - wait for cqp eqe completion for fpm commit + * @cqp: struct for cqp hw + */ +static enum irdma_status_code +irdma_sc_commit_fpm_val_done(struct irdma_sc_cqp *cqp) +{ + return irdma_sc_poll_for_cqp_op_done(cqp, IRDMA_CQP_OP_COMMIT_FPM_VAL, + NULL); +} + +/** + * irdma_sc_commit_fpm_val - cqp wqe for commit fpm values + * @cqp: struct for cqp hw + * @scratch: u64 saved to be used during cqp completion + * @hmc_fn_id: hmc function id + * @commit_fpm_mem: Memory for fpm values + * @post_sq: flag for cqp db to ring + * @wait_type: poll ccq or cqp registers for cqp completion + */ +static enum irdma_status_code +irdma_sc_commit_fpm_val(struct irdma_sc_cqp *cqp, u64 scratch, u8 hmc_fn_id, + struct irdma_dma_mem *commit_fpm_mem, bool post_sq, + u8 wait_type) +{ + __le64 *wqe; + u64 hdr; + u32 tail, val, error; + enum irdma_status_code ret_code = 0; + + wqe = irdma_sc_cqp_get_next_send_wqe(cqp, scratch); + if (!wqe) + return IRDMA_ERR_RING_FULL; + + set_64bit_val(wqe, 16, hmc_fn_id); + set_64bit_val(wqe, 32, commit_fpm_mem->pa); + + hdr = LS_64(IRDMA_COMMIT_FPM_BUF_SIZE, IRDMA_CQPSQ_BUFSIZE) | + LS_64(IRDMA_CQP_OP_COMMIT_FPM_VAL, IRDMA_CQPSQ_OPCODE) | + LS_64(cqp->polarity, IRDMA_CQPSQ_WQEVALID); + + dma_wmb(); /* make sure WQE is written before valid bit is set */ + + set_64bit_val(wqe, 24, hdr); + + irdma_debug_buf(cqp->dev, IRDMA_DEBUG_WQE, "COMMIT_FPM_VAL WQE", wqe, + IRDMA_CQP_WQE_SIZE * 8); + + irdma_get_cqp_reg_info(cqp, &val, &tail, &error); + + if (post_sq) { + irdma_sc_cqp_post_sq(cqp); + + if (wait_type == IRDMA_CQP_WAIT_POLL_REGS) + ret_code = irdma_cqp_poll_registers(cqp, tail, + cqp->dev->hw_attrs.max_done_count); + else if (wait_type == IRDMA_CQP_WAIT_POLL_CQ) + ret_code = irdma_sc_commit_fpm_val_done(cqp); + } + + return ret_code; +} + +/** + * irdma_sc_query_fpm_values_done - poll for cqp wqe completion for query fpm + * @cqp: struct for cqp hw + */ 
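+ /* Both irdma_sc_commit_fpm_val() and irdma_sc_query_fpm_val() select + * their completion path via wait_type: IRDMA_CQP_WAIT_POLL_REGS + * busy-polls the CQP tail register through irdma_cqp_poll_registers(), + * while IRDMA_CQP_WAIT_POLL_CQ waits for the matching completion on + * the CCQ. + */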
+static enum irdma_status_code +irdma_sc_query_fpm_val_done(struct irdma_sc_cqp *cqp) +{ + return irdma_sc_poll_for_cqp_op_done(cqp, IRDMA_CQP_OP_QUERY_FPM_VAL, + NULL); +} + +/** + * irdma_sc_query_fpm_val - cqp wqe query fpm values + * @cqp: struct for cqp hw + * @scratch: u64 saved to be used during cqp completion + * @hmc_fn_id: hmc function id + * @query_fpm_mem: memory for return fpm values + * @post_sq: flag for cqp db to ring + * @wait_type: poll ccq or cqp registers for cqp completion + */ +static enum irdma_status_code +irdma_sc_query_fpm_val(struct irdma_sc_cqp *cqp, u64 scratch, u8 hmc_fn_id, + struct irdma_dma_mem *query_fpm_mem, bool post_sq, + u8 wait_type) +{ + __le64 *wqe; + u64 hdr; + u32 tail, val, error; + enum irdma_status_code ret_code = 0; + + wqe = irdma_sc_cqp_get_next_send_wqe(cqp, scratch); + if (!wqe) + return IRDMA_ERR_RING_FULL; + + set_64bit_val(wqe, 16, hmc_fn_id); + set_64bit_val(wqe, 32, query_fpm_mem->pa); + + hdr = LS_64(IRDMA_CQP_OP_QUERY_FPM_VAL, IRDMA_CQPSQ_OPCODE) | + LS_64(cqp->polarity, IRDMA_CQPSQ_WQEVALID); + dma_wmb(); /* make sure WQE is written before valid bit is set */ + + set_64bit_val(wqe, 24, hdr); + + irdma_debug_buf(cqp->dev, IRDMA_DEBUG_WQE, "QUERY_FPM WQE", wqe, + IRDMA_CQP_WQE_SIZE * 8); + irdma_get_cqp_reg_info(cqp, &val, &tail, &error); + if (post_sq) { + irdma_sc_cqp_post_sq(cqp); + if (wait_type == IRDMA_CQP_WAIT_POLL_REGS) + ret_code = irdma_cqp_poll_registers(cqp, tail, + cqp->dev->hw_attrs.max_done_count); + else if (wait_type == IRDMA_CQP_WAIT_POLL_CQ) + ret_code = irdma_sc_query_fpm_val_done(cqp); + } + + return ret_code; +} + +/** + * irdma_sc_ceq_init - initialize ceq + * @ceq: ceq sc structure + * @info: ceq initialization info + */ +static enum irdma_status_code +irdma_sc_ceq_init(struct irdma_sc_ceq *ceq, struct irdma_ceq_init_info *info) +{ + u32 pble_obj_cnt; + + if (info->elem_cnt < info->dev->hw_attrs.min_hw_ceq_size || + info->elem_cnt > info->dev->hw_attrs.max_hw_ceq_size) + return IRDMA_ERR_INVALID_SIZE; + + if (info->ceq_id > (info->dev->hmc_fpm_misc.max_ceqs - 1)) + return IRDMA_ERR_INVALID_CEQ_ID; + pble_obj_cnt = info->dev->hmc_info->hmc_obj[IRDMA_HMC_IW_PBLE].cnt; + + if (info->virtual_map && info->first_pm_pbl_idx >= pble_obj_cnt) + return IRDMA_ERR_INVALID_PBLE_INDEX; + + ceq->size = sizeof(*ceq); + ceq->ceqe_base = (struct irdma_ceqe *)info->ceqe_base; + ceq->ceq_id = info->ceq_id; + ceq->dev = info->dev; + ceq->elem_cnt = info->elem_cnt; + ceq->ceq_elem_pa = info->ceqe_pa; + ceq->virtual_map = info->virtual_map; + ceq->itr_no_expire = info->itr_no_expire; + ceq->reg_cq = info->reg_cq; + ceq->reg_cq_size = 0; + spin_lock_init(&ceq->req_cq_lock); + ceq->pbl_chunk_size = (ceq->virtual_map ? info->pbl_chunk_size : 0); + ceq->first_pm_pbl_idx = (ceq->virtual_map ? info->first_pm_pbl_idx : 0); + ceq->pbl_list = (ceq->virtual_map ? 
info->pbl_list : NULL); + ceq->tph_en = info->tph_en; + ceq->tph_val = info->tph_val; + ceq->vsi = info->vsi; + ceq->polarity = 1; + IRDMA_RING_INIT(ceq->ceq_ring, ceq->elem_cnt); + ceq->dev->ceq[info->ceq_id] = ceq; + + return 0; +} + +/** + * irdma_sc_ceq_create - create ceq wqe + * @ceq: ceq sc structure + * @scratch: u64 saved to be used during cqp completion + * @post_sq: flag for cqp db to ring + */ + +static enum irdma_status_code irdma_sc_ceq_create(struct irdma_sc_ceq *ceq, + u64 scratch, bool post_sq) +{ + struct irdma_sc_cqp *cqp; + __le64 *wqe; + u64 hdr; + + cqp = ceq->dev->cqp; + wqe = irdma_sc_cqp_get_next_send_wqe(cqp, scratch); + if (!wqe) + return IRDMA_ERR_RING_FULL; + set_64bit_val(wqe, 16, ceq->elem_cnt); + set_64bit_val(wqe, 32, + (ceq->virtual_map ? 0 : ceq->ceq_elem_pa)); + set_64bit_val(wqe, 48, + (ceq->virtual_map ? ceq->first_pm_pbl_idx : 0)); + set_64bit_val(wqe, 56, + LS_64(ceq->tph_val, IRDMA_CQPSQ_TPHVAL) | + LS_64(ceq->vsi->vsi_idx, IRDMA_CQPSQ_VSIIDX)); + hdr = ceq->ceq_id | LS_64(IRDMA_CQP_OP_CREATE_CEQ, IRDMA_CQPSQ_OPCODE) | + LS_64(ceq->pbl_chunk_size, IRDMA_CQPSQ_CEQ_LPBLSIZE) | + LS_64(ceq->virtual_map, IRDMA_CQPSQ_CEQ_VMAP) | + LS_64(ceq->itr_no_expire, IRDMA_CQPSQ_CEQ_ITRNOEXPIRE) | + LS_64(ceq->tph_en, IRDMA_CQPSQ_TPHEN) | + LS_64(cqp->polarity, IRDMA_CQPSQ_WQEVALID); + dma_wmb(); /* make sure WQE is written before valid bit is set */ + + set_64bit_val(wqe, 24, hdr); + + irdma_debug_buf(cqp->dev, IRDMA_DEBUG_WQE, "CEQ_CREATE WQE", wqe, + IRDMA_CQP_WQE_SIZE * 8); + if (post_sq) + irdma_sc_cqp_post_sq(cqp); + + return 0; +} + +/** + * irdma_sc_cceq_create_done - poll for control ceq wqe to complete + * @ceq: ceq sc structure + */ +static enum irdma_status_code +irdma_sc_cceq_create_done(struct irdma_sc_ceq *ceq) +{ + struct irdma_sc_cqp *cqp; + + cqp = ceq->dev->cqp; + return irdma_sc_poll_for_cqp_op_done(cqp, IRDMA_CQP_OP_CREATE_CEQ, + NULL); +} + +/** + * irdma_sc_cceq_destroy_done - poll for destroy cceq to complete + * @ceq: ceq sc structure + */ +static enum irdma_status_code +irdma_sc_cceq_destroy_done(struct irdma_sc_ceq *ceq) +{ + struct irdma_sc_cqp *cqp; + + if (ceq->reg_cq) + irdma_sc_remove_cq_ctx(ceq, ceq->dev->ccq); + + cqp = ceq->dev->cqp; + cqp->process_cqp_sds = irdma_update_sds_noccq; + + return irdma_sc_poll_for_cqp_op_done(cqp, IRDMA_CQP_OP_DESTROY_CEQ, + NULL); +} + +/** + * irdma_sc_cceq_create - create cceq + * @ceq: ceq sc structure + * @scratch: u64 saved to be used during cqp completion + */ +static enum irdma_status_code irdma_sc_cceq_create(struct irdma_sc_ceq *ceq, + u64 scratch) +{ + enum irdma_status_code ret_code; + + ceq->dev->ccq->vsi = ceq->vsi; + if (ceq->reg_cq) { + ret_code = irdma_sc_add_cq_ctx(ceq, ceq->dev->ccq); + if (ret_code) + return ret_code; + } + + ret_code = irdma_sc_ceq_create(ceq, scratch, true); + if (!ret_code) + return irdma_sc_cceq_create_done(ceq); + + return ret_code; +} + +/** + * irdma_sc_ceq_destroy - destroy ceq + * @ceq: ceq sc structure + * @scratch: u64 saved to be used during cqp completion + * @post_sq: flag for cqp db to ring + */ +static enum irdma_status_code irdma_sc_ceq_destroy(struct irdma_sc_ceq *ceq, + u64 scratch, bool post_sq) +{ + struct irdma_sc_cqp *cqp; + __le64 *wqe; + u64 hdr; + + cqp = ceq->dev->cqp; + wqe = irdma_sc_cqp_get_next_send_wqe(cqp, scratch); + if (!wqe) + return IRDMA_ERR_RING_FULL; + + set_64bit_val(wqe, 16, ceq->elem_cnt); + set_64bit_val(wqe, 48, ceq->first_pm_pbl_idx); + hdr = ceq->ceq_id | + LS_64(IRDMA_CQP_OP_DESTROY_CEQ, IRDMA_CQPSQ_OPCODE) | + 
LS_64(ceq->pbl_chunk_size, IRDMA_CQPSQ_CEQ_LPBLSIZE) | + LS_64(ceq->virtual_map, IRDMA_CQPSQ_CEQ_VMAP) | + LS_64(ceq->tph_en, IRDMA_CQPSQ_TPHEN) | + LS_64(cqp->polarity, IRDMA_CQPSQ_WQEVALID); + dma_wmb(); /* make sure WQE is written before valid bit is set */ + + set_64bit_val(wqe, 24, hdr); + + irdma_debug_buf(cqp->dev, IRDMA_DEBUG_WQE, "CEQ_DESTROY WQE", wqe, + IRDMA_CQP_WQE_SIZE * 8); + if (post_sq) + irdma_sc_cqp_post_sq(cqp); + + return 0; +} + +/** + * irdma_sc_process_ceq - process ceq + * @dev: sc device struct + * @ceq: ceq sc structure + */ +static void *irdma_sc_process_ceq(struct irdma_sc_dev *dev, + struct irdma_sc_ceq *ceq) +{ + u64 temp; + __le64 *ceqe; + struct irdma_sc_cq *cq; + u8 polarity; + u32 cq_idx = 0; + unsigned long flags; + + do { + ceqe = IRDMA_GET_CURRENT_CEQ_ELEM(ceq); + get_64bit_val(ceqe, 0, &temp); + polarity = (u8)RS_64(temp, IRDMA_CEQE_VALID); + if (polarity != ceq->polarity) + return NULL; + + cq = (struct irdma_sc_cq *)(unsigned long)LS_64_1(temp, 1); + if (!cq) + return NULL; + + if (ceq->reg_cq) { + spin_lock_irqsave(&ceq->req_cq_lock, flags); + cq_idx = irdma_sc_find_reg_cq(ceq, cq); + spin_unlock_irqrestore(&ceq->req_cq_lock, flags); + } + + IRDMA_RING_MOVE_TAIL(ceq->ceq_ring); + if (!IRDMA_RING_CURRENT_TAIL(ceq->ceq_ring)) + ceq->polarity ^= 1; + } while (cq_idx == IRDMA_INVALID_CQ_IDX); + + irdma_sc_cq_ack(cq); + + return cq; +} + +/** + * irdma_sc_aeq_init - initialize aeq + * @aeq: aeq structure ptr + * @info: aeq initialization info + */ +static enum irdma_status_code +irdma_sc_aeq_init(struct irdma_sc_aeq *aeq, struct irdma_aeq_init_info *info) +{ + u32 pble_obj_cnt; + + if (info->elem_cnt < info->dev->hw_attrs.min_hw_aeq_size || + info->elem_cnt > info->dev->hw_attrs.max_hw_aeq_size) + return IRDMA_ERR_INVALID_SIZE; + + pble_obj_cnt = info->dev->hmc_info->hmc_obj[IRDMA_HMC_IW_PBLE].cnt; + + if (info->virtual_map && info->first_pm_pbl_idx >= pble_obj_cnt) + return IRDMA_ERR_INVALID_PBLE_INDEX; + + aeq->size = sizeof(*aeq); + aeq->polarity = 1; + aeq->aeqe_base = (struct irdma_sc_aeqe *)info->aeqe_base; + aeq->dev = info->dev; + aeq->elem_cnt = info->elem_cnt; + aeq->aeq_elem_pa = info->aeq_elem_pa; + IRDMA_RING_INIT(aeq->aeq_ring, aeq->elem_cnt); + aeq->virtual_map = info->virtual_map; + aeq->pbl_list = (aeq->virtual_map ? info->pbl_list : NULL); + aeq->pbl_chunk_size = (aeq->virtual_map ? info->pbl_chunk_size : 0); + aeq->first_pm_pbl_idx = (aeq->virtual_map ? info->first_pm_pbl_idx : 0); + info->dev->aeq = aeq; + + return 0; +} + +/** + * irdma_sc_aeq_create - create aeq + * @aeq: aeq structure ptr + * @scratch: u64 saved to be used during cqp completion + * @post_sq: flag for cqp db to ring + */ +static enum irdma_status_code irdma_sc_aeq_create(struct irdma_sc_aeq *aeq, + u64 scratch, bool post_sq) +{ + __le64 *wqe; + struct irdma_sc_cqp *cqp; + u64 hdr; + + cqp = aeq->dev->cqp; + wqe = irdma_sc_cqp_get_next_send_wqe(cqp, scratch); + if (!wqe) + return IRDMA_ERR_RING_FULL; + set_64bit_val(wqe, 16, aeq->elem_cnt); + set_64bit_val(wqe, 32, + (aeq->virtual_map ? 0 : aeq->aeq_elem_pa)); + set_64bit_val(wqe, 48, + (aeq->virtual_map ? 
aeq->first_pm_pbl_idx : 0)); + + hdr = LS_64(IRDMA_CQP_OP_CREATE_AEQ, IRDMA_CQPSQ_OPCODE) | + LS_64(aeq->pbl_chunk_size, IRDMA_CQPSQ_AEQ_LPBLSIZE) | + LS_64(aeq->virtual_map, IRDMA_CQPSQ_AEQ_VMAP) | + LS_64(cqp->polarity, IRDMA_CQPSQ_WQEVALID); + dma_wmb(); /* make sure WQE is written before valid bit is set */ + + set_64bit_val(wqe, 24, hdr); + + irdma_debug_buf(cqp->dev, IRDMA_DEBUG_WQE, "AEQ_CREATE WQE", wqe, + IRDMA_CQP_WQE_SIZE * 8); + if (post_sq) + irdma_sc_cqp_post_sq(cqp); + + return 0; +} + +/** + * irdma_sc_aeq_destroy - destroy aeq during close + * @aeq: aeq structure ptr + * @scratch: u64 saved to be used during cqp completion + * @post_sq: flag for cqp db to ring + */ +static enum irdma_status_code irdma_sc_aeq_destroy(struct irdma_sc_aeq *aeq, + u64 scratch, bool post_sq) +{ + __le64 *wqe; + struct irdma_sc_cqp *cqp; + struct irdma_sc_dev *dev; + u64 hdr; + + dev = aeq->dev; + if (dev->privileged) + writel(0, dev->hw_regs[IRDMA_PFINT_AEQCTL]); + + cqp = dev->cqp; + wqe = irdma_sc_cqp_get_next_send_wqe(cqp, scratch); + if (!wqe) + return IRDMA_ERR_RING_FULL; + set_64bit_val(wqe, 16, aeq->elem_cnt); + set_64bit_val(wqe, 48, aeq->first_pm_pbl_idx); + hdr = LS_64(IRDMA_CQP_OP_DESTROY_AEQ, IRDMA_CQPSQ_OPCODE) | + LS_64(aeq->pbl_chunk_size, IRDMA_CQPSQ_AEQ_LPBLSIZE) | + LS_64(aeq->virtual_map, IRDMA_CQPSQ_AEQ_VMAP) | + LS_64(cqp->polarity, IRDMA_CQPSQ_WQEVALID); + dma_wmb(); /* make sure WQE is written before valid bit is set */ + + set_64bit_val(wqe, 24, hdr); + + irdma_debug_buf(dev, IRDMA_DEBUG_WQE, "AEQ_DESTROY WQE", wqe, + IRDMA_CQP_WQE_SIZE * 8); + if (post_sq) + irdma_sc_cqp_post_sq(cqp); + return 0; +} + +/** + * irdma_sc_get_next_aeqe - get next aeq entry + * @aeq: aeq structure ptr + * @info: aeqe info to be returned + */ +static enum irdma_status_code +irdma_sc_get_next_aeqe(struct irdma_sc_aeq *aeq, struct irdma_aeqe_info *info) +{ + u64 temp, compl_ctx; + __le64 *aeqe; + u16 wqe_idx; + u8 ae_src; + u8 polarity; + + aeqe = IRDMA_GET_CURRENT_AEQ_ELEM(aeq); + get_64bit_val(aeqe, 0, &compl_ctx); + get_64bit_val(aeqe, 8, &temp); + polarity = (u8)RS_64(temp, IRDMA_AEQE_VALID); + + if (aeq->polarity != polarity) + return IRDMA_ERR_Q_EMPTY; + + irdma_debug_buf(aeq->dev, IRDMA_DEBUG_WQE, "AEQ_ENTRY WQE", aeqe, 16); + + ae_src = (u8)RS_64(temp, IRDMA_AEQE_AESRC); + wqe_idx = (u16)RS_64(temp, IRDMA_AEQE_WQDESCIDX); + info->qp_cq_id = (u32)RS_64(temp, IRDMA_AEQE_QPCQID_LOW) | + ((u32)RS_64(temp, IRDMA_AEQE_QPCQID_HI) << 18); + info->ae_id = (u16)RS_64(temp, IRDMA_AEQE_AECODE); + info->tcp_state = (u8)RS_64(temp, IRDMA_AEQE_TCPSTATE); + info->iwarp_state = (u8)RS_64(temp, IRDMA_AEQE_IWSTATE); + info->q2_data_written = (u8)RS_64(temp, IRDMA_AEQE_Q2DATA); + info->aeqe_overflow = (bool)RS_64(temp, IRDMA_AEQE_OVERFLOW); + + switch (info->ae_id) { + case IRDMA_AE_PRIV_OPERATION_DENIED: + case IRDMA_AE_AMP_INVALIDATE_TYPE1_MW: + case IRDMA_AE_AMP_MWBIND_ZERO_BASED_TYPE1_MW: + case IRDMA_AE_AMP_FASTREG_INVALID_PBL_HPS_CFG: + case IRDMA_AE_AMP_FASTREG_PBLE_MISMATCH: + case IRDMA_AE_UDA_XMIT_DGRAM_TOO_LONG: + case IRDMA_AE_UDA_XMIT_BAD_PD: + case IRDMA_AE_UDA_XMIT_DGRAM_TOO_SHORT: + case IRDMA_AE_BAD_CLOSE: + case IRDMA_AE_RDMAP_ROE_BAD_LLP_CLOSE: + case IRDMA_AE_RDMA_READ_WHILE_ORD_ZERO: + case IRDMA_AE_STAG_ZERO_INVALID: + case IRDMA_AE_IB_RREQ_AND_Q1_FULL: + case IRDMA_AE_IB_INVALID_REQUEST: + case IRDMA_AE_WQE_UNEXPECTED_OPCODE: + case IRDMA_AE_IB_REMOTE_ACCESS_ERROR: + case IRDMA_AE_IB_REMOTE_OP_ERROR: + case IRDMA_AE_DDP_UBE_INVALID_DDP_VERSION: + case IRDMA_AE_DDP_UBE_INVALID_MO: + 
case IRDMA_AE_DDP_UBE_INVALID_QN: + case IRDMA_AE_DDP_NO_L_BIT: + case IRDMA_AE_RDMAP_ROE_INVALID_RDMAP_VERSION: + case IRDMA_AE_RDMAP_ROE_UNEXPECTED_OPCODE: + case IRDMA_AE_ROE_INVALID_RDMA_READ_REQUEST: + case IRDMA_AE_ROE_INVALID_RDMA_WRITE_OR_READ_RESP: + case IRDMA_AE_ROCE_RSP_LENGTH_ERROR: + case IRDMA_AE_INVALID_ARP_ENTRY: + case IRDMA_AE_INVALID_TCP_OPTION_RCVD: + case IRDMA_AE_STALE_ARP_ENTRY: + case IRDMA_AE_INVALID_AH_ENTRY: + case IRDMA_AE_LLP_CLOSE_COMPLETE: + case IRDMA_AE_LLP_CONNECTION_RESET: + case IRDMA_AE_LLP_FIN_RECEIVED: + case IRDMA_AE_LLP_RECEIVED_MPA_CRC_ERROR: + case IRDMA_AE_LLP_SEGMENT_TOO_SMALL: + case IRDMA_AE_LLP_SYN_RECEIVED: + case IRDMA_AE_LLP_TERMINATE_RECEIVED: + case IRDMA_AE_LLP_TOO_MANY_RETRIES: + case IRDMA_AE_LLP_DOUBT_REACHABILITY: + case IRDMA_AE_LLP_CONNECTION_ESTABLISHED: + case IRDMA_AE_RESET_SENT: + case IRDMA_AE_TERMINATE_SENT: + case IRDMA_AE_RESET_NOT_SENT: + case IRDMA_AE_LCE_QP_CATASTROPHIC: + case IRDMA_AE_QP_SUSPEND_COMPLETE: + case IRDMA_AE_UDA_L4LEN_INVALID: + info->qp = true; + info->compl_ctx = compl_ctx; + ae_src = IRDMA_AE_SOURCE_RSVD; + break; + case IRDMA_AE_LCE_CQ_CATASTROPHIC: + info->cq = true; + info->compl_ctx = LS_64_1(compl_ctx, 1); + ae_src = IRDMA_AE_SOURCE_RSVD; + break; + case IRDMA_AE_ROCE_EMPTY_MCG: + case IRDMA_AE_ROCE_BAD_MC_IP_ADDR: + case IRDMA_AE_ROCE_BAD_MC_QPID: + case IRDMA_AE_MCG_QP_PROTOCOL_MISMATCH: + ae_src = IRDMA_AE_SOURCE_RSVD; + break; + default: + break; + } + + switch (ae_src) { + case IRDMA_AE_SOURCE_RQ: + case IRDMA_AE_SOURCE_RQ_0011: + info->qp = true; + info->wqe_idx = wqe_idx; + info->compl_ctx = compl_ctx; + break; + case IRDMA_AE_SOURCE_CQ: + case IRDMA_AE_SOURCE_CQ_0110: + case IRDMA_AE_SOURCE_CQ_1010: + case IRDMA_AE_SOURCE_CQ_1110: + info->cq = true; + info->compl_ctx = LS_64_1(compl_ctx, 1); + break; + case IRDMA_AE_SOURCE_SQ: + case IRDMA_AE_SOURCE_SQ_0111: + info->qp = true; + info->sq = true; + info->wqe_idx = wqe_idx; + info->compl_ctx = compl_ctx; + break; + case IRDMA_AE_SOURCE_IN_RR_WR: + case IRDMA_AE_SOURCE_IN_RR_WR_1011: + info->qp = true; + info->compl_ctx = compl_ctx; + info->in_rdrsp_wr = true; + break; + case IRDMA_AE_SOURCE_OUT_RR: + case IRDMA_AE_SOURCE_OUT_RR_1111: + info->qp = true; + info->compl_ctx = compl_ctx; + info->out_rdrsp = true; + break; + case IRDMA_AE_SOURCE_RSVD: + /* fallthrough */ + default: + break; + } + + IRDMA_RING_MOVE_TAIL(aeq->aeq_ring); + if (!IRDMA_RING_CURRENT_TAIL(aeq->aeq_ring)) + aeq->polarity ^= 1; + + return 0; +} + +/** + * irdma_sc_repost_aeq_entries - repost completed aeq entries + * @dev: sc device struct + * @count: allocate count + */ +static enum irdma_status_code +irdma_sc_repost_aeq_entries(struct irdma_sc_dev *dev, u32 count) +{ + writel(count, dev->hw_regs[IRDMA_AEQALLOC]); + + return 0; +} + +/** + * irdma_sc_aeq_create_done - create aeq + * @aeq: aeq structure ptr + */ +static enum irdma_status_code irdma_sc_aeq_create_done(struct irdma_sc_aeq *aeq) +{ + struct irdma_sc_cqp *cqp; + + cqp = aeq->dev->cqp; + + return irdma_sc_poll_for_cqp_op_done(cqp, IRDMA_CQP_OP_CREATE_AEQ, + NULL); +} + +/** + * irdma_sc_aeq_destroy_done - destroy of aeq during close + * @aeq: aeq structure ptr + */ +static enum irdma_status_code +irdma_sc_aeq_destroy_done(struct irdma_sc_aeq *aeq) +{ + struct irdma_sc_cqp *cqp; + + cqp = aeq->dev->cqp; + + return irdma_sc_poll_for_cqp_op_done(cqp, IRDMA_CQP_OP_DESTROY_AEQ, + NULL); +} + +/** + * irdma_sc_ccq_init - initialize control cq + * @cq: sc's cq ctruct + * @info: info for control cq initialization + */ 
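+ /* The CCQ (control CQ) always uses CQ id 0 and is created through the + * generic CQ_CREATE CQP op (irdma_sc_ccq_create() wraps + * irdma_sc_cq_create()); CQP command completions are consumed from it + * by irdma_sc_ccq_get_cqe_info(). + */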
+static enum irdma_status_code +irdma_sc_ccq_init(struct irdma_sc_cq *cq, struct irdma_ccq_init_info *info) +{ + u32 pble_obj_cnt; + + if (info->num_elem < info->dev->hw_attrs.uk_attrs.min_hw_cq_size || + info->num_elem > info->dev->hw_attrs.uk_attrs.max_hw_cq_size) + return IRDMA_ERR_INVALID_SIZE; + + if (info->ceq_id > (info->dev->hmc_fpm_misc.max_ceqs - 1)) + return IRDMA_ERR_INVALID_CEQ_ID; + + pble_obj_cnt = info->dev->hmc_info->hmc_obj[IRDMA_HMC_IW_PBLE].cnt; + + if (info->virtual_map && info->first_pm_pbl_idx >= pble_obj_cnt) + return IRDMA_ERR_INVALID_PBLE_INDEX; + + cq->cq_pa = info->cq_pa; + cq->cq_uk.cq_base = info->cq_base; + cq->shadow_area_pa = info->shadow_area_pa; + cq->cq_uk.shadow_area = info->shadow_area; + cq->shadow_read_threshold = info->shadow_read_threshold; + cq->dev = info->dev; + cq->ceq_id = info->ceq_id; + cq->cq_uk.cq_size = info->num_elem; + cq->cq_type = IRDMA_CQ_TYPE_CQP; + cq->ceqe_mask = info->ceqe_mask; + IRDMA_RING_INIT(cq->cq_uk.cq_ring, info->num_elem); + cq->cq_uk.cq_id = 0; /* control cq is id 0 always */ + cq->ceq_id_valid = info->ceq_id_valid; + cq->tph_en = info->tph_en; + cq->tph_val = info->tph_val; + cq->cq_uk.avoid_mem_cflct = info->avoid_mem_cflct; + cq->pbl_list = info->pbl_list; + cq->virtual_map = info->virtual_map; + cq->pbl_chunk_size = info->pbl_chunk_size; + cq->first_pm_pbl_idx = info->first_pm_pbl_idx; + cq->cq_uk.polarity = true; + cq->vsi = info->vsi; + cq->cq_uk.cq_ack_db = cq->dev->cq_ack_db; + + /* Only applicable to CQs other than CCQ so initialize to zero */ + cq->cq_uk.cqe_alloc_db = NULL; + + info->dev->ccq = cq; + return 0; +} + +/** + * irdma_sc_ccq_create_done - poll cqp for ccq create + * @ccq: ccq sc struct + */ +static enum irdma_status_code irdma_sc_ccq_create_done(struct irdma_sc_cq *ccq) +{ + struct irdma_sc_cqp *cqp; + + cqp = ccq->dev->cqp; + return irdma_sc_poll_for_cqp_op_done(cqp, IRDMA_CQP_OP_CREATE_CQ, NULL); +} + +/** + * irdma_sc_ccq_create - create control cq + * @ccq: ccq sc struct + * @scratch: u64 saved to be used during cqp completion + * @check_overflow: overlow flag for ccq + * @post_sq: flag for cqp db to ring + */ +static enum irdma_status_code irdma_sc_ccq_create(struct irdma_sc_cq *ccq, + u64 scratch, + bool check_overflow, + bool post_sq) +{ + enum irdma_status_code ret_code; + + ret_code = irdma_sc_cq_create(ccq, scratch, check_overflow, post_sq); + if (ret_code) + return ret_code; + + if (post_sq) { + ret_code = irdma_sc_ccq_create_done(ccq); + if (ret_code) + return ret_code; + } + ccq->dev->cqp->process_cqp_sds = irdma_cqp_sds_cmd; + + return 0; +} + +/** + * irdma_sc_ccq_destroy - destroy ccq during close + * @ccq: ccq sc struct + * @scratch: u64 saved to be used during cqp completion + * @post_sq: flag for cqp db to ring + */ +static enum irdma_status_code irdma_sc_ccq_destroy(struct irdma_sc_cq *ccq, + u64 scratch, bool post_sq) +{ + struct irdma_sc_cqp *cqp; + __le64 *wqe; + u64 hdr; + enum irdma_status_code ret_code = 0; + u32 tail, val, error; + + cqp = ccq->dev->cqp; + wqe = irdma_sc_cqp_get_next_send_wqe(cqp, scratch); + if (!wqe) + return IRDMA_ERR_RING_FULL; + + set_64bit_val(wqe, 0, ccq->cq_uk.cq_size); + set_64bit_val(wqe, 8, RS_64_1(ccq, 1)); + set_64bit_val(wqe, 40, ccq->shadow_area_pa); + + hdr = ccq->cq_uk.cq_id | + LS_64((ccq->ceq_id_valid ? 
ccq->ceq_id : 0), + IRDMA_CQPSQ_CQ_CEQID) | + LS_64(IRDMA_CQP_OP_DESTROY_CQ, IRDMA_CQPSQ_OPCODE) | + LS_64(ccq->ceqe_mask, IRDMA_CQPSQ_CQ_ENCEQEMASK) | + LS_64(ccq->ceq_id_valid, IRDMA_CQPSQ_CQ_CEQIDVALID) | + LS_64(ccq->tph_en, IRDMA_CQPSQ_TPHEN) | + LS_64(ccq->cq_uk.avoid_mem_cflct, IRDMA_CQPSQ_CQ_AVOIDMEMCNFLCT) | + LS_64(cqp->polarity, IRDMA_CQPSQ_WQEVALID); + dma_wmb(); /* make sure WQE is written before valid bit is set */ + + set_64bit_val(wqe, 24, hdr); + + irdma_debug_buf(cqp->dev, IRDMA_DEBUG_WQE, "CCQ_DESTROY WQE", wqe, + IRDMA_CQP_WQE_SIZE * 8); + irdma_get_cqp_reg_info(cqp, &val, &tail, &error); + if (error) + return IRDMA_ERR_CQP_COMPL_ERROR; + + if (post_sq) { + irdma_sc_cqp_post_sq(cqp); + ret_code = irdma_cqp_poll_registers(cqp, tail, 1000); + } + + cqp->process_cqp_sds = irdma_update_sds_noccq; + + return ret_code; +} + +/** + * irdma_sc_init_iw_hmc() - queries fpm values using cqp and populates hmc_info + * @dev : ptr to irdma_dev struct + * @hmc_fn_id: hmc function id + */ +enum irdma_status_code irdma_sc_init_iw_hmc(struct irdma_sc_dev *dev, + u8 hmc_fn_id) +{ + struct irdma_hmc_info *hmc_info; + struct irdma_dma_mem query_fpm_mem; + enum irdma_status_code ret_code = 0; + bool poll_registers = true; + u8 wait_type; + + if (hmc_fn_id > dev->hw_attrs.max_hw_vf_fpm_id || + (dev->hmc_fn_id != hmc_fn_id && + hmc_fn_id < dev->hw_attrs.first_hw_vf_fpm_id)) + return IRDMA_ERR_INVALID_HMCFN_ID; + + dev_dbg(rfdev_to_dev(dev), "HMC: hmc_fn_id %u, dev->hmc_fn_id %u\n", + hmc_fn_id, dev->hmc_fn_id); + if (hmc_fn_id == dev->hmc_fn_id) { + hmc_info = dev->hmc_info; + query_fpm_mem.pa = dev->fpm_query_buf_pa; + query_fpm_mem.va = dev->fpm_query_buf; + } else { + dev_dbg(rfdev_to_dev(dev), + "HMC: Bad hmc function id: hmc_fn_id %u, dev->hmc_fn_id %u\n", + hmc_fn_id, dev->hmc_fn_id); + + return IRDMA_ERR_INVALID_HMCFN_ID; + } + hmc_info->hmc_fn_id = hmc_fn_id; + wait_type = poll_registers ? 
(u8)IRDMA_CQP_WAIT_POLL_REGS : + (u8)IRDMA_CQP_WAIT_POLL_CQ; + + ret_code = irdma_sc_query_fpm_val(dev->cqp, 0, hmc_info->hmc_fn_id, + &query_fpm_mem, true, wait_type); + if (ret_code) + return ret_code; + + /* parse the fpm_query_buf and fill hmc obj info */ + ret_code = irdma_sc_parse_fpm_query_buf(dev, query_fpm_mem.va, hmc_info, + &dev->hmc_fpm_misc); + + irdma_debug_buf(dev, IRDMA_DEBUG_HMC, "QUERY FPM BUFFER", + query_fpm_mem.va, IRDMA_QUERY_FPM_BUF_SIZE); + return ret_code; +} + +/** + * irdma_sc_configure_iw_fpm() - commits hmc obj cnt values using cqp command and + * populates fpm base address in hmc_info + * @dev : ptr to irdma_dev struct + * @hmc_fn_id: hmc function id + */ +static enum irdma_status_code irdma_sc_cfg_iw_fpm(struct irdma_sc_dev *dev, + u8 hmc_fn_id) +{ + struct irdma_hmc_info *hmc_info; + struct irdma_hmc_obj_info *obj_info; + __le64 *buf; + struct irdma_dma_mem commit_fpm_mem; + enum irdma_status_code ret_code = 0; + bool poll_registers = true; + u8 wait_type; + + if (hmc_fn_id > dev->hw_attrs.max_hw_vf_fpm_id || + (dev->hmc_fn_id != hmc_fn_id && + hmc_fn_id < dev->hw_attrs.first_hw_vf_fpm_id)) + return IRDMA_ERR_INVALID_HMCFN_ID; + + if (hmc_fn_id != dev->hmc_fn_id) + return IRDMA_ERR_INVALID_FPM_FUNC_ID; + + hmc_info = dev->hmc_info; + if (!hmc_info) + return IRDMA_ERR_BAD_PTR; + + obj_info = hmc_info->hmc_obj; + buf = dev->fpm_commit_buf; + + set_64bit_val(buf, 0, (u64)obj_info[IRDMA_HMC_IW_QP].cnt); + set_64bit_val(buf, 8, (u64)obj_info[IRDMA_HMC_IW_CQ].cnt); + set_64bit_val(buf, 16, (u64)0); /* RSRVD */ + set_64bit_val(buf, 24, (u64)obj_info[IRDMA_HMC_IW_HTE].cnt); + set_64bit_val(buf, 32, (u64)obj_info[IRDMA_HMC_IW_ARP].cnt); + set_64bit_val(buf, 40, (u64)0); /* RSVD */ + set_64bit_val(buf, 48, (u64)obj_info[IRDMA_HMC_IW_MR].cnt); + set_64bit_val(buf, 56, (u64)obj_info[IRDMA_HMC_IW_XF].cnt); + set_64bit_val(buf, 64, (u64)obj_info[IRDMA_HMC_IW_XFFL].cnt); + set_64bit_val(buf, 72, (u64)obj_info[IRDMA_HMC_IW_Q1].cnt); + set_64bit_val(buf, 80, (u64)obj_info[IRDMA_HMC_IW_Q1FL].cnt); + set_64bit_val(buf, 88, + (u64)obj_info[IRDMA_HMC_IW_TIMER].cnt); + set_64bit_val(buf, 96, + (u64)obj_info[IRDMA_HMC_IW_FSIMC].cnt); + set_64bit_val(buf, 104, + (u64)obj_info[IRDMA_HMC_IW_FSIAV].cnt); + set_64bit_val(buf, 112, + (u64)obj_info[IRDMA_HMC_IW_PBLE].cnt); + set_64bit_val(buf, 120, (u64)0); /* RSVD */ + set_64bit_val(buf, 128, (u64)obj_info[IRDMA_HMC_IW_RRF].cnt); + set_64bit_val(buf, 136, + (u64)obj_info[IRDMA_HMC_IW_RRFFL].cnt); + set_64bit_val(buf, 144, (u64)obj_info[IRDMA_HMC_IW_HDR].cnt); + set_64bit_val(buf, 152, (u64)obj_info[IRDMA_HMC_IW_MD].cnt); + set_64bit_val(buf, 160, + (u64)obj_info[IRDMA_HMC_IW_OOISC].cnt); + set_64bit_val(buf, 168, + (u64)obj_info[IRDMA_HMC_IW_OOISCFFL].cnt); + + commit_fpm_mem.pa = dev->fpm_commit_buf_pa; + commit_fpm_mem.va = dev->fpm_commit_buf; + + wait_type = poll_registers ? 
(u8)IRDMA_CQP_WAIT_POLL_REGS : + (u8)IRDMA_CQP_WAIT_POLL_CQ; + irdma_debug_buf(dev, IRDMA_DEBUG_HMC, "COMMIT FPM BUFFER", + commit_fpm_mem.va, IRDMA_COMMIT_FPM_BUF_SIZE); + ret_code = irdma_sc_commit_fpm_val(dev->cqp, 0, hmc_info->hmc_fn_id, + &commit_fpm_mem, true, wait_type); + if (!ret_code) + ret_code = irdma_sc_parse_fpm_commit_buf(dev, dev->fpm_commit_buf, + hmc_info->hmc_obj, + &hmc_info->sd_table.sd_cnt); + irdma_debug_buf(dev, IRDMA_DEBUG_HMC, "COMMIT FPM BUFFER", + commit_fpm_mem.va, IRDMA_COMMIT_FPM_BUF_SIZE); + + return ret_code; +} + +/** + * cqp_sds_wqe_fill - fill cqp wqe doe sd + * @cqp: struct for cqp hw + * @info: sd info for wqe + * @scratch: u64 saved to be used during cqp completion + */ +static enum irdma_status_code +cqp_sds_wqe_fill(struct irdma_sc_cqp *cqp, struct irdma_update_sds_info *info, + u64 scratch) +{ + u64 data; + u64 hdr; + __le64 *wqe; + int mem_entries, wqe_entries; + struct irdma_dma_mem *sdbuf = &cqp->sdbuf; + u64 offset; + u32 wqe_idx; + + wqe = irdma_sc_cqp_get_next_send_wqe_idx(cqp, scratch, &wqe_idx); + if (!wqe) + return IRDMA_ERR_RING_FULL; + + IRDMA_CQP_INIT_WQE(wqe); + wqe_entries = (info->cnt > 3) ? 3 : info->cnt; + mem_entries = info->cnt - wqe_entries; + + if (mem_entries) { + offset = wqe_idx * IRDMA_UPDATE_SD_BUFF_SIZE; + memcpy(((char *)sdbuf->va + offset), &info->entry[3], mem_entries << 4); + + data = (u64)sdbuf->pa + offset; + } else { + data = 0; + } + data |= LS_64(info->hmc_fn_id, IRDMA_CQPSQ_UPESD_HMCFNID); + set_64bit_val(wqe, 16, data); + + switch (wqe_entries) { + case 3: + set_64bit_val(wqe, 48, + (LS_64(info->entry[2].cmd, IRDMA_CQPSQ_UPESD_SDCMD) | + LS_64(1, IRDMA_CQPSQ_UPESD_ENTRY_VALID))); + + set_64bit_val(wqe, 56, info->entry[2].data); + /* fallthrough */ + case 2: + set_64bit_val(wqe, 32, + (LS_64(info->entry[1].cmd, IRDMA_CQPSQ_UPESD_SDCMD) | + LS_64(1, IRDMA_CQPSQ_UPESD_ENTRY_VALID))); + + set_64bit_val(wqe, 40, info->entry[1].data); + /* fallthrough */ + case 1: + set_64bit_val(wqe, 0, + LS_64(info->entry[0].cmd, IRDMA_CQPSQ_UPESD_SDCMD)); + + set_64bit_val(wqe, 8, info->entry[0].data); + break; + default: + break; + } + + hdr = LS_64(IRDMA_CQP_OP_UPDATE_PE_SDS, IRDMA_CQPSQ_OPCODE) | + LS_64(cqp->polarity, IRDMA_CQPSQ_WQEVALID) | + LS_64(mem_entries, IRDMA_CQPSQ_UPESD_ENTRY_COUNT); + dma_wmb(); /* make sure WQE is written before valid bit is set */ + + set_64bit_val(wqe, 24, hdr); + + irdma_debug_buf(cqp->dev, IRDMA_DEBUG_WQE, "UPDATE_PE_SDS WQE", wqe, + IRDMA_CQP_WQE_SIZE * 8); + + return 0; +} + +/** + * irdma_update_pe_sds - cqp wqe for sd + * @dev: ptr to irdma_dev struct + * @info: sd info for sd's + * @scratch: u64 saved to be used during cqp completion + */ +static enum irdma_status_code +irdma_update_pe_sds(struct irdma_sc_dev *dev, + struct irdma_update_sds_info *info, u64 scratch) +{ + struct irdma_sc_cqp *cqp = dev->cqp; + enum irdma_status_code ret_code; + + ret_code = cqp_sds_wqe_fill(cqp, info, scratch); + if (!ret_code) + irdma_sc_cqp_post_sq(cqp); + + return ret_code; +} + +/** + * irdma_update_sds_noccq - update sd before ccq created + * @dev: sc device struct + * @info: sd info for sd's + */ +enum irdma_status_code +irdma_update_sds_noccq(struct irdma_sc_dev *dev, + struct irdma_update_sds_info *info) +{ + u32 error, val, tail; + struct irdma_sc_cqp *cqp = dev->cqp; + enum irdma_status_code ret_code; + + ret_code = cqp_sds_wqe_fill(cqp, info, 0); + if (ret_code) + return ret_code; + + irdma_get_cqp_reg_info(cqp, &val, &tail, &error); + if (error) + return IRDMA_ERR_CQP_COMPL_ERROR; + 
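/* No CCQ exists yet at this point, so completion of the UPDATE_PE_SDS WQE is + * detected by polling the CQP tail register captured above rather than by + * waiting for a CCQ event. + */ +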
irdma_sc_cqp_post_sq(cqp); + return irdma_cqp_poll_registers(cqp, tail, + cqp->dev->hw_attrs.max_done_count); +} + +/** + * irdma_sc_static_hmc_pages_allocated - cqp wqe to allocate hmc pages + * @cqp: struct for cqp hw + * @scratch: u64 saved to be used during cqp completion + * @hmc_fn_id: hmc function id + * @post_sq: flag for cqp db to ring + * @poll_registers: flag to poll register for cqp completion + */ +enum irdma_status_code +irdma_sc_static_hmc_pages_allocated(struct irdma_sc_cqp *cqp, u64 scratch, + u8 hmc_fn_id, bool post_sq, + bool poll_registers) +{ + u64 hdr; + __le64 *wqe; + u32 tail, val, error; + + wqe = irdma_sc_cqp_get_next_send_wqe(cqp, scratch); + if (!wqe) + return IRDMA_ERR_RING_FULL; + + set_64bit_val(wqe, 16, + LS_64(hmc_fn_id, IRDMA_SHMC_PAGE_ALLOCATED_HMC_FN_ID)); + + hdr = LS_64(IRDMA_CQP_OP_SHMC_PAGES_ALLOCATED, IRDMA_CQPSQ_OPCODE) | + LS_64(cqp->polarity, IRDMA_CQPSQ_WQEVALID); + dma_wmb(); /* make sure WQE is written before valid bit is set */ + + set_64bit_val(wqe, 24, hdr); + + irdma_debug_buf(cqp->dev, IRDMA_DEBUG_WQE, "SHMC_PAGES_ALLOCATED WQE", + wqe, IRDMA_CQP_WQE_SIZE * 8); + irdma_get_cqp_reg_info(cqp, &val, &tail, &error); + if (error) + return IRDMA_ERR_CQP_COMPL_ERROR; + + if (post_sq) { + irdma_sc_cqp_post_sq(cqp); + if (poll_registers) + /* check for cqp sq tail update */ + return irdma_cqp_poll_registers(cqp, tail, 1000); + else + return irdma_sc_poll_for_cqp_op_done(cqp, + IRDMA_CQP_OP_SHMC_PAGES_ALLOCATED, + NULL); + } + + return 0; +} + +/** + * irdma_cqp_ring_full - check if cqp ring is full + * @cqp: struct for cqp hw + */ +static bool irdma_cqp_ring_full(struct irdma_sc_cqp *cqp) +{ + return IRDMA_RING_FULL_ERR(cqp->sq_ring); +} + +/** + * irdma_est_sd - returns approximate number of SDs for HMC + * @dev: sc device struct + * @hmc_info: hmc structure, size and count for HMC objects + */ +static u32 irdma_est_sd(struct irdma_sc_dev *dev, + struct irdma_hmc_info *hmc_info) +{ + int i; + u64 size = 0; + u64 sd; + + for (i = IRDMA_HMC_IW_QP; i < IRDMA_HMC_IW_MAX; i++) + if (i != IRDMA_HMC_IW_PBLE) + size += round_up(hmc_info->hmc_obj[i].cnt * + hmc_info->hmc_obj[i].size, 512); + if (dev->privileged) + size += round_up(hmc_info->hmc_obj[IRDMA_HMC_IW_PBLE].cnt * + hmc_info->hmc_obj[IRDMA_HMC_IW_PBLE].size, 512); + if (size & 0x1FFFFF) + sd = (size >> 21) + 1; /* add 1 for remainder */ + else + sd = size >> 21; + if (!dev->privileged) { + /* 2MB alignment for VF PBLE HMC */ + size = hmc_info->hmc_obj[IRDMA_HMC_IW_PBLE].cnt * + hmc_info->hmc_obj[IRDMA_HMC_IW_PBLE].size; + if (size & 0x1FFFFF) + sd += (size >> 21) + 1; /* add 1 for remainder */ + else + sd += size >> 21; + } + if (sd > 0xFFFFFFFF) { + dev_dbg(rfdev_to_dev(dev), "HMC: sd overflow[%lld]\n", sd); + sd = 0xFFFFFFFF - 1; + } + + return (u32)sd; +} + +/** + * irdma_sc_query_rdma_features_done - poll cqp for query features done + * @cqp: struct for cqp hw + */ +static enum irdma_status_code +irdma_sc_query_rdma_features_done(struct irdma_sc_cqp *cqp) +{ + return irdma_sc_poll_for_cqp_op_done(cqp, + IRDMA_CQP_OP_QUERY_RDMA_FEATURES, + NULL); +} + +/** + * irdma_sc_query_rdma_features - query RDMA features and FW ver + * @cqp: struct for cqp hw + * @buf: buffer to hold query info + * @scratch: u64 saved to be used during cqp completion + */ +static enum irdma_status_code +irdma_sc_query_rdma_features(struct irdma_sc_cqp *cqp, + struct irdma_dma_mem *buf, u64 scratch) +{ + __le64 *wqe; + u64 temp; + + wqe = irdma_sc_cqp_get_next_send_wqe(cqp, scratch); + if (!wqe) + return 
IRDMA_ERR_RING_FULL; + + temp = buf->pa; + set_64bit_val(wqe, 32, temp); + + temp = LS_64(cqp->polarity, IRDMA_CQPSQ_QUERY_RDMA_FEATURES_WQEVALID) | + LS_64(buf->size, IRDMA_CQPSQ_QUERY_RDMA_FEATURES_BUF_LEN) | + LS_64(IRDMA_CQP_OP_QUERY_RDMA_FEATURES, IRDMA_CQPSQ_UP_OP); + dma_wmb(); /* make sure WQE is written before valid bit is set */ + + set_64bit_val(wqe, 24, temp); + + irdma_debug_buf(cqp->dev, IRDMA_DEBUG_WQE, "QUERY RDMA FEATURES", wqe, + IRDMA_CQP_WQE_SIZE * 8); + irdma_sc_cqp_post_sq(cqp); + + return 0; +} + +/** + * irdma_get_rdma_features - get RDMA features + * @dev: sc device struct + */ +enum irdma_status_code irdma_get_rdma_features(struct irdma_sc_dev *dev) +{ + enum irdma_status_code ret_code; + struct irdma_dma_mem feat_buf; + u64 temp; + u16 byte_idx, feat_type, feat_cnt, feat_idx; + + feat_buf.size = ALIGN(IRDMA_FEATURE_BUF_SIZE, + IRDMA_FEATURE_BUF_ALIGNMENT); + feat_buf.va = dma_alloc_coherent(hw_to_dev(dev->hw), feat_buf.size, + &feat_buf.pa, GFP_KERNEL); + if (!feat_buf.va) + return IRDMA_ERR_NO_MEMORY; + + ret_code = irdma_sc_query_rdma_features(dev->cqp, &feat_buf, 0); + if (!ret_code) + ret_code = irdma_sc_query_rdma_features_done(dev->cqp); + if (ret_code) + goto exit; + + get_64bit_val(feat_buf.va, 0, &temp); + feat_cnt = (u16)RS_64(temp, IRDMA_FEATURE_CNT); + if (feat_cnt < 2) { + ret_code = IRDMA_ERR_INVALID_FEAT_CNT; + goto exit; + } else if (feat_cnt > IRDMA_MAX_FEATURES) { + dev_dbg(rfdev_to_dev(dev), + "DEV: feature buf size insufficient, retrying with larger buffer\n"); + dma_free_coherent(hw_to_dev(dev->hw), feat_buf.size, + feat_buf.va, feat_buf.pa); + feat_buf.va = NULL; + feat_buf.size = ALIGN(8 * feat_cnt, + IRDMA_FEATURE_BUF_ALIGNMENT); + feat_buf.va = dma_alloc_coherent(hw_to_dev(dev->hw), + feat_buf.size, &feat_buf.pa, + GFP_KERNEL); + if (!feat_buf.va) + return IRDMA_ERR_NO_MEMORY; + + ret_code = irdma_sc_query_rdma_features(dev->cqp, &feat_buf, 0); + if (!ret_code) + ret_code = irdma_sc_query_rdma_features_done(dev->cqp); + if (ret_code) + goto exit; + + get_64bit_val(feat_buf.va, 0, &temp); + feat_cnt = (u16)RS_64(temp, IRDMA_FEATURE_CNT); + if (feat_cnt < 2) { + ret_code = IRDMA_ERR_INVALID_FEAT_CNT; + goto exit; + } + } + + irdma_debug_buf(dev, IRDMA_DEBUG_WQE, "QUERY RDMA FEATURES", feat_buf.va, + feat_cnt * 8); + + for (byte_idx = 0, feat_idx = 0; feat_idx < min_t(u16, feat_cnt, IRDMA_MAX_FEATURES); + feat_idx++, byte_idx += 8) { + get_64bit_val(feat_buf.va, byte_idx, &temp); + feat_type = RS_64(temp, IRDMA_FEATURE_TYPE); + if (feat_type >= IRDMA_MAX_FEATURES) { + dev_dbg(rfdev_to_dev(dev), + "DEV: found unrecognized feature type %d\n", + feat_type); + continue; + } + dev->feature_info[feat_type] = temp; + } +exit: + dma_free_coherent(hw_to_dev(dev->hw), feat_buf.size, feat_buf.va, + feat_buf.pa); + feat_buf.va = NULL; + return ret_code; +} + +static void cfg_fpm_value_gen_1(struct irdma_sc_dev *dev, + struct irdma_hmc_info *hmc_info, u32 qpwanted) +{ + u32 powerof2 = 1; + + while (powerof2 < dev->hw_attrs.max_hw_wqes) + powerof2 *= 2; + hmc_info->hmc_obj[IRDMA_HMC_IW_XF].cnt = powerof2 * qpwanted; + + powerof2 = 1; + while (powerof2 < dev->hw_attrs.max_hw_ird) + powerof2 *= 2; + hmc_info->hmc_obj[IRDMA_HMC_IW_Q1].cnt = powerof2 * qpwanted * 2; +} + +static void cfg_fpm_value_gen_2(struct irdma_sc_dev *dev, + struct irdma_hmc_info *hmc_info, u32 qpwanted) +{ + struct irdma_hmc_fpm_misc *hmc_fpm_misc = &dev->hmc_fpm_misc; + + hmc_info->hmc_obj[IRDMA_HMC_IW_XF].cnt = + 2 * hmc_fpm_misc->xf_block_size * qpwanted; + 
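/* Per-QP object counts scale linearly with the requested QP count; as an + * illustrative (not hardware-mandated) example, qpwanted = 1024 with + * xf_block_size = 8 gives XF = 2 * 8 * 1024 = 16384, and the caller then + * derives the XFFL free list as XF / xf_block_size = 2048. + */ +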
hmc_info->hmc_obj[IRDMA_HMC_IW_Q1].cnt = 2 * 16 * qpwanted; + hmc_info->hmc_obj[IRDMA_HMC_IW_HDR].cnt = qpwanted; + + if (hmc_info->hmc_obj[IRDMA_HMC_IW_RRF].max_cnt) + hmc_info->hmc_obj[IRDMA_HMC_IW_RRF].cnt = 32 * qpwanted; + if (hmc_info->hmc_obj[IRDMA_HMC_IW_RRFFL].max_cnt) + hmc_info->hmc_obj[IRDMA_HMC_IW_RRFFL].cnt = + hmc_info->hmc_obj[IRDMA_HMC_IW_RRF].cnt / + hmc_fpm_misc->rrf_block_size; + if (hmc_info->hmc_obj[IRDMA_HMC_IW_OOISC].max_cnt) + hmc_info->hmc_obj[IRDMA_HMC_IW_OOISC].cnt = 32 * qpwanted; + if (hmc_info->hmc_obj[IRDMA_HMC_IW_OOISCFFL].max_cnt) + hmc_info->hmc_obj[IRDMA_HMC_IW_OOISCFFL].cnt = + hmc_info->hmc_obj[IRDMA_HMC_IW_OOISC].cnt / + hmc_fpm_misc->ooiscf_block_size; +} + +/** + * irdma_config_fpm_values - configure HMC objects + * @dev: sc device struct + * @qp_count: desired qp count + */ +enum irdma_status_code irdma_cfg_fpm_val(struct irdma_sc_dev *dev, u32 qp_count) +{ + struct irdma_virt_mem virt_mem; + u32 i, mem_size; + u32 qpwanted, mrwanted, pblewanted; + u32 powerof2, hte; + u32 sd_needed; + u32 sd_diff; + u32 loop_count = 0; + struct irdma_hmc_info *hmc_info; + struct irdma_hmc_fpm_misc *hmc_fpm_misc; + enum irdma_status_code ret_code = 0; + + hmc_info = dev->hmc_info; + hmc_fpm_misc = &dev->hmc_fpm_misc; + + ret_code = irdma_sc_init_iw_hmc(dev, dev->hmc_fn_id); + if (ret_code) { + dev_dbg(rfdev_to_dev(dev), + "HMC: irdma_sc_init_iw_hmc returned error_code = %d\n", + ret_code); + return ret_code; + } + + for (i = IRDMA_HMC_IW_QP; i < IRDMA_HMC_IW_MAX; i++) + hmc_info->hmc_obj[i].cnt = hmc_info->hmc_obj[i].max_cnt; + sd_needed = irdma_est_sd(dev, hmc_info); + dev_dbg(rfdev_to_dev(dev), + "HMC: FW max resources sd_needed[%08d] first_sd_index[%04d]\n", + sd_needed, hmc_info->first_sd_index); + dev_dbg(rfdev_to_dev(dev), "HMC: sd count %d where max sd is %d\n", + hmc_info->sd_table.sd_cnt, hmc_fpm_misc->max_sds); + + qpwanted = min(qp_count, hmc_info->hmc_obj[IRDMA_HMC_IW_QP].max_cnt); + + powerof2 = 1; + while (powerof2 <= qpwanted) + powerof2 *= 2; + powerof2 /= 2; + qpwanted = powerof2; + + mrwanted = hmc_info->hmc_obj[IRDMA_HMC_IW_MR].max_cnt; + pblewanted = hmc_info->hmc_obj[IRDMA_HMC_IW_PBLE].max_cnt; + + dev_dbg(rfdev_to_dev(dev), + "HMC: req_qp=%d max_sd=%d, max_qp = %d, max_cq=%d, max_mr=%d, max_pble=%d, mc=%d, av=%d\n", + qp_count, hmc_fpm_misc->max_sds, + hmc_info->hmc_obj[IRDMA_HMC_IW_QP].max_cnt, + hmc_info->hmc_obj[IRDMA_HMC_IW_CQ].max_cnt, + hmc_info->hmc_obj[IRDMA_HMC_IW_MR].max_cnt, + hmc_info->hmc_obj[IRDMA_HMC_IW_PBLE].max_cnt, + hmc_info->hmc_obj[IRDMA_HMC_IW_FSIMC].max_cnt, + hmc_info->hmc_obj[IRDMA_HMC_IW_FSIAV].max_cnt); + hmc_info->hmc_obj[IRDMA_HMC_IW_FSIMC].cnt = + hmc_info->hmc_obj[IRDMA_HMC_IW_FSIMC].max_cnt; + hmc_info->hmc_obj[IRDMA_HMC_IW_FSIAV].cnt = + hmc_info->hmc_obj[IRDMA_HMC_IW_FSIAV].max_cnt; + hmc_info->hmc_obj[IRDMA_HMC_IW_ARP].cnt = + hmc_info->hmc_obj[IRDMA_HMC_IW_ARP].max_cnt; + + hmc_info->hmc_obj[IRDMA_HMC_IW_APBVT_ENTRY].cnt = 1; + + do { + ++loop_count; + hmc_info->hmc_obj[IRDMA_HMC_IW_QP].cnt = qpwanted; + hmc_info->hmc_obj[IRDMA_HMC_IW_CQ].cnt = + min(2 * qpwanted, hmc_info->hmc_obj[IRDMA_HMC_IW_CQ].cnt); + hmc_info->hmc_obj[IRDMA_HMC_IW_RESERVED].cnt = 0; /* Reserved */ + hmc_info->hmc_obj[IRDMA_HMC_IW_MR].cnt = mrwanted; + + hte = round_up(qpwanted + hmc_info->hmc_obj[IRDMA_HMC_IW_FSIMC].cnt, 512); + powerof2 = 1; + while (powerof2 < hte) + powerof2 *= 2; + hmc_info->hmc_obj[IRDMA_HMC_IW_HTE].cnt = + powerof2 * hmc_fpm_misc->ht_multiplier; + if (dev->hw_attrs.uk_attrs.hw_rev == IRDMA_GEN_1) + 
cfg_fpm_value_gen_1(dev, hmc_info, qpwanted); + else + cfg_fpm_value_gen_2(dev, hmc_info, qpwanted); + + hmc_info->hmc_obj[IRDMA_HMC_IW_XFFL].cnt = + hmc_info->hmc_obj[IRDMA_HMC_IW_XF].cnt / hmc_fpm_misc->xf_block_size; + hmc_info->hmc_obj[IRDMA_HMC_IW_Q1FL].cnt = + hmc_info->hmc_obj[IRDMA_HMC_IW_Q1].cnt / hmc_fpm_misc->q1_block_size; + hmc_info->hmc_obj[IRDMA_HMC_IW_TIMER].cnt = + (round_up(qpwanted, 512) / 512 + 1) * hmc_fpm_misc->timer_bucket; + + hmc_info->hmc_obj[IRDMA_HMC_IW_PBLE].cnt = pblewanted; + sd_needed = irdma_est_sd(dev, hmc_info); + dev_dbg(rfdev_to_dev(dev), + "HMC: sd_needed = %d, hmc_fpm_misc->max_sds=%d, mrwanted=%d, pblewanted=%d qpwanted=%d\n", + sd_needed, hmc_fpm_misc->max_sds, mrwanted, + pblewanted, qpwanted); + + /* Do not reduce resources further. All objects fit with max SDs */ + if (sd_needed <= hmc_fpm_misc->max_sds) + break; + + sd_diff = sd_needed - hmc_fpm_misc->max_sds; + if (sd_diff > 128) { + if (qpwanted > 128) + qpwanted /= 2; + mrwanted /= 2; + pblewanted /= 2; + continue; + } + if (dev->cqp->hmc_profile != IRDMA_HMC_PROFILE_FAVOR_VF && + pblewanted > (512 * FPM_MULTIPLIER * sd_diff)) { + pblewanted -= 256 * FPM_MULTIPLIER * sd_diff; + continue; + } else if (pblewanted > (100 * FPM_MULTIPLIER)) { + pblewanted -= 10 * FPM_MULTIPLIER; + } else if (pblewanted > FPM_MULTIPLIER) { + pblewanted -= FPM_MULTIPLIER; + } else if (qpwanted <= 128) { + if (hmc_info->hmc_obj[IRDMA_HMC_IW_FSIMC].cnt > 256) + hmc_info->hmc_obj[IRDMA_HMC_IW_FSIMC].cnt /= 2; + if (hmc_info->hmc_obj[IRDMA_HMC_IW_FSIAV].cnt > 256) + hmc_info->hmc_obj[IRDMA_HMC_IW_FSIAV].cnt /= 2; + } + if (mrwanted > FPM_MULTIPLIER) + mrwanted -= FPM_MULTIPLIER; + if (!(loop_count % 10) && qpwanted > 128) { + qpwanted /= 2; + if (hmc_info->hmc_obj[IRDMA_HMC_IW_FSIAV].cnt > 256) + hmc_info->hmc_obj[IRDMA_HMC_IW_FSIAV].cnt /= 2; + } + } while (loop_count < 2000); + + if (sd_needed > hmc_fpm_misc->max_sds) { + dev_dbg(rfdev_to_dev(dev), + "HMC: cfg_fpm failed loop_cnt=%d, sd_needed=%d, max sd count %d\n", + loop_count, sd_needed, hmc_info->sd_table.sd_cnt); + return IRDMA_ERR_CFG; + } + + if (loop_count > 1 && sd_needed < hmc_fpm_misc->max_sds) { + pblewanted += (hmc_fpm_misc->max_sds - sd_needed) * 256 * + FPM_MULTIPLIER; + hmc_info->hmc_obj[IRDMA_HMC_IW_PBLE].cnt = pblewanted; + sd_needed = irdma_est_sd(dev, hmc_info); + } + + dev_dbg(rfdev_to_dev(dev), + "HMC: loop_cnt=%d, sd_needed=%d, qpcnt = %d, cqcnt=%d, mrcnt=%d, pblecnt=%d, mc=%d, ah=%d, max sd count %d, first sd index %d\n", + loop_count, sd_needed, hmc_info->hmc_obj[IRDMA_HMC_IW_QP].cnt, + hmc_info->hmc_obj[IRDMA_HMC_IW_CQ].cnt, + hmc_info->hmc_obj[IRDMA_HMC_IW_MR].cnt, + hmc_info->hmc_obj[IRDMA_HMC_IW_PBLE].cnt, + hmc_info->hmc_obj[IRDMA_HMC_IW_FSIMC].cnt, + hmc_info->hmc_obj[IRDMA_HMC_IW_FSIAV].cnt, + hmc_info->sd_table.sd_cnt, hmc_info->first_sd_index); + + ret_code = irdma_sc_cfg_iw_fpm(dev, dev->hmc_fn_id); + if (ret_code) { + dev_dbg(rfdev_to_dev(dev), + "HMC: cfg_iw_fpm returned error_code[x%08X]\n", + readl(dev->hw_regs[IRDMA_CQPERRCODES])); + return ret_code; + } + + mem_size = sizeof(struct irdma_hmc_sd_entry) * + (hmc_info->sd_table.sd_cnt + hmc_info->first_sd_index + 1); + virt_mem.size = mem_size; + virt_mem.va = kzalloc(virt_mem.size, GFP_ATOMIC); + if (!virt_mem.va) { + dev_dbg(rfdev_to_dev(dev), + "HMC: failed to allocate memory for sd_entry buffer\n"); + return IRDMA_ERR_NO_MEMORY; + } + hmc_info->sd_table.sd_entry = virt_mem.va; + + return ret_code; +} + +/** + * irdma_exec_cqp_cmd - execute cqp cmd when wqe are available + * 
@dev: rdma device + * @pcmdinfo: cqp command info + */ +static enum irdma_status_code irdma_exec_cqp_cmd(struct irdma_sc_dev *dev, + struct cqp_cmds_info *pcmdinfo) +{ + enum irdma_status_code status; + struct irdma_dma_mem val_mem; + bool alloc = false; + + dev->cqp_cmd_stats[pcmdinfo->cqp_cmd]++; + switch (pcmdinfo->cqp_cmd) { + case IRDMA_OP_CEQ_DESTROY: + status = irdma_sc_ceq_destroy(pcmdinfo->in.u.ceq_destroy.ceq, + pcmdinfo->in.u.ceq_destroy.scratch, + pcmdinfo->post_sq); + break; + case IRDMA_OP_AEQ_DESTROY: + status = irdma_sc_aeq_destroy(pcmdinfo->in.u.aeq_destroy.aeq, + pcmdinfo->in.u.aeq_destroy.scratch, + pcmdinfo->post_sq); + + break; + case IRDMA_OP_CEQ_CREATE: + status = irdma_sc_ceq_create(pcmdinfo->in.u.ceq_create.ceq, + pcmdinfo->in.u.ceq_create.scratch, + pcmdinfo->post_sq); + break; + case IRDMA_OP_AEQ_CREATE: + status = irdma_sc_aeq_create(pcmdinfo->in.u.aeq_create.aeq, + pcmdinfo->in.u.aeq_create.scratch, + pcmdinfo->post_sq); + break; + case IRDMA_OP_QP_UPLOAD_CONTEXT: + status = irdma_sc_qp_upload_context(pcmdinfo->in.u.qp_upload_context.dev, + &pcmdinfo->in.u.qp_upload_context.info, + pcmdinfo->in.u.qp_upload_context.scratch, + pcmdinfo->post_sq); + break; + case IRDMA_OP_CQ_CREATE: + status = irdma_sc_cq_create(pcmdinfo->in.u.cq_create.cq, + pcmdinfo->in.u.cq_create.scratch, + pcmdinfo->in.u.cq_create.check_overflow, + pcmdinfo->post_sq); + break; + case IRDMA_OP_CQ_MODIFY: + status = irdma_sc_cq_modify(pcmdinfo->in.u.cq_modify.cq, + &pcmdinfo->in.u.cq_modify.info, + pcmdinfo->in.u.cq_modify.scratch, + pcmdinfo->post_sq); + break; + case IRDMA_OP_CQ_DESTROY: + status = irdma_sc_cq_destroy(pcmdinfo->in.u.cq_destroy.cq, + pcmdinfo->in.u.cq_destroy.scratch, + pcmdinfo->post_sq); + break; + case IRDMA_OP_QP_FLUSH_WQES: + status = irdma_sc_qp_flush_wqes(pcmdinfo->in.u.qp_flush_wqes.qp, + &pcmdinfo->in.u.qp_flush_wqes.info, + pcmdinfo->in.u.qp_flush_wqes.scratch, + pcmdinfo->post_sq); + break; + case IRDMA_OP_GEN_AE: + status = irdma_sc_gen_ae(pcmdinfo->in.u.gen_ae.qp, + &pcmdinfo->in.u.gen_ae.info, + pcmdinfo->in.u.gen_ae.scratch, + pcmdinfo->post_sq); + break; + case IRDMA_OP_MANAGE_PUSH_PAGE: + status = irdma_sc_manage_push_page(pcmdinfo->in.u.manage_push_page.cqp, + &pcmdinfo->in.u.manage_push_page.info, + pcmdinfo->in.u.manage_push_page.scratch, + pcmdinfo->post_sq); + break; + case IRDMA_OP_UPDATE_PE_SDS: + status = irdma_update_pe_sds(pcmdinfo->in.u.update_pe_sds.dev, + &pcmdinfo->in.u.update_pe_sds.info, + pcmdinfo->in.u.update_pe_sds.scratch); + break; + case IRDMA_OP_MANAGE_HMC_PM_FUNC_TABLE: + /* switch to calling through the call table */ + status = + irdma_sc_manage_hmc_pm_func_table(pcmdinfo->in.u.manage_hmc_pm.dev->cqp, + &pcmdinfo->in.u.manage_hmc_pm.info, + pcmdinfo->in.u.manage_hmc_pm.scratch, + true); + break; + case IRDMA_OP_SUSPEND: + status = irdma_sc_suspend_qp(pcmdinfo->in.u.suspend_resume.cqp, + pcmdinfo->in.u.suspend_resume.qp, + pcmdinfo->in.u.suspend_resume.scratch); + break; + case IRDMA_OP_RESUME: + status = irdma_sc_resume_qp(pcmdinfo->in.u.suspend_resume.cqp, + pcmdinfo->in.u.suspend_resume.qp, + pcmdinfo->in.u.suspend_resume.scratch); + break; + case IRDMA_OP_QUERY_FPM_VAL: + val_mem.pa = pcmdinfo->in.u.query_fpm_val.fpm_val_pa; + val_mem.va = pcmdinfo->in.u.query_fpm_val.fpm_val_va; + status = irdma_sc_query_fpm_val(pcmdinfo->in.u.query_fpm_val.cqp, + pcmdinfo->in.u.query_fpm_val.scratch, + pcmdinfo->in.u.query_fpm_val.hmc_fn_id, + &val_mem, true, IRDMA_CQP_WAIT_EVENT); + break; + case IRDMA_OP_COMMIT_FPM_VAL: + val_mem.pa = 
pcmdinfo->in.u.commit_fpm_val.fpm_val_pa; + val_mem.va = pcmdinfo->in.u.commit_fpm_val.fpm_val_va; + status = irdma_sc_commit_fpm_val(pcmdinfo->in.u.commit_fpm_val.cqp, + pcmdinfo->in.u.commit_fpm_val.scratch, + pcmdinfo->in.u.commit_fpm_val.hmc_fn_id, + &val_mem, + true, + IRDMA_CQP_WAIT_EVENT); + break; + case IRDMA_OP_STATS_ALLOCATE: + alloc = true; + /* fall-through */ + case IRDMA_OP_STATS_FREE: + status = irdma_sc_manage_stats_inst(pcmdinfo->in.u.stats_manage.cqp, + &pcmdinfo->in.u.stats_manage.info, + alloc, + pcmdinfo->in.u.stats_manage.scratch); + break; + case IRDMA_OP_STATS_GATHER: + status = irdma_sc_gather_stats(pcmdinfo->in.u.stats_gather.cqp, + &pcmdinfo->in.u.stats_gather.info, + pcmdinfo->in.u.stats_gather.scratch); + break; + case IRDMA_OP_WS_MODIFY_NODE: + status = irdma_sc_manage_ws_node(pcmdinfo->in.u.ws_node.cqp, + &pcmdinfo->in.u.ws_node.info, + IRDMA_MODIFY_NODE, + pcmdinfo->in.u.ws_node.scratch); + break; + case IRDMA_OP_WS_DELETE_NODE: + status = irdma_sc_manage_ws_node(pcmdinfo->in.u.ws_node.cqp, + &pcmdinfo->in.u.ws_node.info, + IRDMA_DEL_NODE, + pcmdinfo->in.u.ws_node.scratch); + break; + case IRDMA_OP_WS_ADD_NODE: + status = irdma_sc_manage_ws_node(pcmdinfo->in.u.ws_node.cqp, + &pcmdinfo->in.u.ws_node.info, + IRDMA_ADD_NODE, + pcmdinfo->in.u.ws_node.scratch); + break; + case IRDMA_OP_SET_UP_MAP: + status = irdma_sc_set_up_map(pcmdinfo->in.u.up_map.cqp, + &pcmdinfo->in.u.up_map.info, + pcmdinfo->in.u.up_map.scratch); + break; + case IRDMA_OP_QUERY_RDMA_FEATURES: + status = irdma_sc_query_rdma_features(pcmdinfo->in.u.query_rdma.cqp, + &pcmdinfo->in.u.query_rdma.query_buff_mem, + pcmdinfo->in.u.query_rdma.scratch); + break; + case IRDMA_OP_DELETE_ARP_CACHE_ENTRY: + status = irdma_sc_del_arp_cache_entry(pcmdinfo->in.u.del_arp_cache_entry.cqp, + pcmdinfo->in.u.del_arp_cache_entry.scratch, + pcmdinfo->in.u.del_arp_cache_entry.arp_index, + pcmdinfo->post_sq); + break; + case IRDMA_OP_MANAGE_APBVT_ENTRY: + status = irdma_sc_manage_apbvt_entry(pcmdinfo->in.u.manage_apbvt_entry.cqp, + &pcmdinfo->in.u.manage_apbvt_entry.info, + pcmdinfo->in.u.manage_apbvt_entry.scratch, + pcmdinfo->post_sq); + break; + case IRDMA_OP_MANAGE_QHASH_TABLE_ENTRY: + status = irdma_sc_manage_qhash_table_entry(pcmdinfo->in.u.manage_qhash_table_entry.cqp, + &pcmdinfo->in.u.manage_qhash_table_entry.info, + pcmdinfo->in.u.manage_qhash_table_entry.scratch, + pcmdinfo->post_sq); + break; + case IRDMA_OP_QP_MODIFY: + status = irdma_sc_qp_modify(pcmdinfo->in.u.qp_modify.qp, + &pcmdinfo->in.u.qp_modify.info, + pcmdinfo->in.u.qp_modify.scratch, + pcmdinfo->post_sq); + break; + case IRDMA_OP_QP_CREATE: + status = irdma_sc_qp_create(pcmdinfo->in.u.qp_create.qp, + &pcmdinfo->in.u.qp_create.info, + pcmdinfo->in.u.qp_create.scratch, + pcmdinfo->post_sq); + break; + case IRDMA_OP_QP_DESTROY: + status = irdma_sc_qp_destroy(pcmdinfo->in.u.qp_destroy.qp, + pcmdinfo->in.u.qp_destroy.scratch, + pcmdinfo->in.u.qp_destroy.remove_hash_idx, + pcmdinfo->in.u.qp_destroy.ignore_mw_bnd, + pcmdinfo->post_sq); + break; + case IRDMA_OP_ALLOC_STAG: + status = irdma_sc_alloc_stag(pcmdinfo->in.u.alloc_stag.dev, + &pcmdinfo->in.u.alloc_stag.info, + pcmdinfo->in.u.alloc_stag.scratch, + pcmdinfo->post_sq); + break; + case IRDMA_OP_MR_REG_NON_SHARED: + status = irdma_sc_mr_reg_non_shared(pcmdinfo->in.u.mr_reg_non_shared.dev, + &pcmdinfo->in.u.mr_reg_non_shared.info, + pcmdinfo->in.u.mr_reg_non_shared.scratch, + pcmdinfo->post_sq); + break; + case IRDMA_OP_DEALLOC_STAG: + status = + 
irdma_sc_dealloc_stag(pcmdinfo->in.u.dealloc_stag.dev, + &pcmdinfo->in.u.dealloc_stag.info, + pcmdinfo->in.u.dealloc_stag.scratch, + pcmdinfo->post_sq); + break; + case IRDMA_OP_MW_ALLOC: + status = irdma_sc_mw_alloc(pcmdinfo->in.u.mw_alloc.dev, + &pcmdinfo->in.u.mw_alloc.info, + pcmdinfo->in.u.mw_alloc.scratch, + pcmdinfo->post_sq); + break; + case IRDMA_OP_ADD_ARP_CACHE_ENTRY: + status = irdma_sc_add_arp_cache_entry(pcmdinfo->in.u.add_arp_cache_entry.cqp, + &pcmdinfo->in.u.add_arp_cache_entry.info, + pcmdinfo->in.u.add_arp_cache_entry.scratch, + pcmdinfo->post_sq); + break; + case IRDMA_OP_ALLOC_LOCAL_MAC_ENTRY: + status = dev->cqp_misc_ops->alloc_local_mac_entry(pcmdinfo->in.u.alloc_local_mac_entry.cqp, + pcmdinfo->in.u.alloc_local_mac_entry.scratch, + pcmdinfo->post_sq); + break; + case IRDMA_OP_ADD_LOCAL_MAC_ENTRY: + status = dev->cqp_misc_ops->add_local_mac_entry(pcmdinfo->in.u.add_local_mac_entry.cqp, + &pcmdinfo->in.u.add_local_mac_entry.info, + pcmdinfo->in.u.add_local_mac_entry.scratch, + pcmdinfo->post_sq); + break; + case IRDMA_OP_DELETE_LOCAL_MAC_ENTRY: + status = dev->cqp_misc_ops->del_local_mac_entry(pcmdinfo->in.u.del_local_mac_entry.cqp, + pcmdinfo->in.u.del_local_mac_entry.scratch, + pcmdinfo->in.u.del_local_mac_entry.entry_idx, + pcmdinfo->in.u.del_local_mac_entry.ignore_ref_count, + pcmdinfo->post_sq); + break; + case IRDMA_OP_AH_CREATE: + status = dev->iw_uda_ops->create_ah(pcmdinfo->in.u.ah_create.cqp, + &pcmdinfo->in.u.ah_create.info, + pcmdinfo->in.u.ah_create.scratch); + break; + case IRDMA_OP_AH_DESTROY: + status = dev->iw_uda_ops->destroy_ah(pcmdinfo->in.u.ah_destroy.cqp, + &pcmdinfo->in.u.ah_destroy.info, + pcmdinfo->in.u.ah_destroy.scratch); + break; + case IRDMA_OP_MC_CREATE: + status = dev->iw_uda_ops->mcast_grp_create(pcmdinfo->in.u.mc_create.cqp, + &pcmdinfo->in.u.mc_create.info, + pcmdinfo->in.u.mc_create.scratch); + break; + case IRDMA_OP_MC_DESTROY: + status = dev->iw_uda_ops->mcast_grp_destroy(pcmdinfo->in.u.mc_destroy.cqp, + &pcmdinfo->in.u.mc_destroy.info, + pcmdinfo->in.u.mc_destroy.scratch); + break; + case IRDMA_OP_MC_MODIFY: + status = dev->iw_uda_ops->mcast_grp_modify(pcmdinfo->in.u.mc_modify.cqp, + &pcmdinfo->in.u.mc_modify.info, + pcmdinfo->in.u.mc_modify.scratch); + break; + default: + status = IRDMA_NOT_SUPPORTED; + break; + } + + return status; +} + +/** + * irdma_process_cqp_cmd - process all cqp commands + * @dev: sc device struct + * @pcmdinfo: cqp command info + */ +enum irdma_status_code irdma_process_cqp_cmd(struct irdma_sc_dev *dev, + struct cqp_cmds_info *pcmdinfo) +{ + enum irdma_status_code status = 0; + unsigned long flags; + + spin_lock_irqsave(&dev->cqp_lock, flags); + if (list_empty(&dev->cqp_cmd_head) && !irdma_cqp_ring_full(dev->cqp)) + status = irdma_exec_cqp_cmd(dev, pcmdinfo); + else + list_add_tail(&pcmdinfo->cqp_cmd_entry, &dev->cqp_cmd_head); + spin_unlock_irqrestore(&dev->cqp_lock, flags); + return status; +} + +/** + * irdma_process_bh - called from tasklet for cqp list + * @dev: sc device struct + */ +enum irdma_status_code irdma_process_bh(struct irdma_sc_dev *dev) +{ + enum irdma_status_code status = 0; + struct cqp_cmds_info *pcmdinfo; + unsigned long flags; + + spin_lock_irqsave(&dev->cqp_lock, flags); + while (!list_empty(&dev->cqp_cmd_head) && + !irdma_cqp_ring_full(dev->cqp)) { + pcmdinfo = (struct cqp_cmds_info *)irdma_remove_cqp_head(dev); + status = irdma_exec_cqp_cmd(dev, pcmdinfo); + if (status) + break; + } + spin_unlock_irqrestore(&dev->cqp_lock, flags); + return status; +} + +/** + * irdma_enable_irq - 
Enable interrupt + * @dev: pointer to the device structure + * @idx: vector index + */ +static void irdma_ena_irq(struct irdma_sc_dev *dev, u32 idx) +{ + u32 val; + + val = IRDMA_GLINT_DYN_CTL_INTENA_M | IRDMA_GLINT_DYN_CTL_CLEARPBA_M | + IRDMA_GLINT_DYN_CTL_ITR_INDX_M; + if (dev->hw_attrs.uk_attrs.hw_rev != IRDMA_GEN_1) + writel(val, dev->hw_regs[IRDMA_GLINT_DYN_CTL] + idx); + else + writel(val, dev->hw_regs[IRDMA_GLINT_DYN_CTL] + (idx - 1)); +} + +/** + * irdma_disable_irq - Disable interrupt + * @dev: pointer to the device structure + * @idx: vector index + */ +static void irdma_disable_irq(struct irdma_sc_dev *dev, u32 idx) +{ + if (dev->hw_attrs.uk_attrs.hw_rev != IRDMA_GEN_1) + writel(0, dev->hw_regs[IRDMA_GLINT_DYN_CTL] + idx); + else + writel(0, dev->hw_regs[IRDMA_GLINT_DYN_CTL] + (idx - 1)); +} + +/** + * irdma_config_ceq- Configure CEQ interrupt + * @dev: pointer to the device structure + * @ceq_id: Completion Event Queue ID + * @idx: vector index + */ +static void irdma_cfg_ceq(struct irdma_sc_dev *dev, u32 ceq_id, u32 idx) +{ + u32 reg_val; + + reg_val = (IRDMA_GLINT_CEQCTL_CAUSE_ENA_M | + (idx << IRDMA_GLINT_CEQCTL_MSIX_INDX_S) | + IRDMA_GLINT_CEQCTL_ITR_INDX_M); + + writel(reg_val, dev->hw_regs[IRDMA_GLINT_CEQCTL] + ceq_id); +} + +/** + * irdma_config_aeq- Configure AEQ interrupt + * @dev: pointer to the device structure + * @idx: vector index + */ +static void irdma_cfg_aeq(struct irdma_sc_dev *dev, u32 idx) +{ + u32 reg_val; + + reg_val = (IRDMA_PFINT_AEQCTL_CAUSE_ENA_M | + (idx << IRDMA_PFINT_AEQCTL_MSIX_INDX_S) | + IRDMA_PFINT_AEQCTL_ITR_INDX_M); + + writel(reg_val, dev->hw_regs[IRDMA_PFINT_AEQCTL]); +} + +/* iwarp pd ops */ +static struct irdma_pd_ops iw_pd_ops = { + .pd_init = irdma_sc_pd_init +}; + +static struct irdma_priv_qp_ops iw_priv_qp_ops = { + .iw_mr_fast_register = irdma_sc_mr_fast_register, + .qp_create = irdma_sc_qp_create, + .qp_destroy = irdma_sc_qp_destroy, + .qp_flush_wqes = irdma_sc_qp_flush_wqes, + .qp_init = irdma_sc_qp_init, + .qp_modify = irdma_sc_qp_modify, + .qp_send_lsmm = irdma_sc_send_lsmm, + .qp_send_lsmm_nostag = irdma_sc_send_lsmm_nostag, + .qp_send_rtt = irdma_sc_send_rtt, + .qp_setctx = irdma_sc_qp_setctx, + .qp_setctx_roce = irdma_sc_qp_setctx_roce, + .qp_upload_context = irdma_sc_qp_upload_context, + .update_resume_qp = irdma_sc_resume_qp, + .update_suspend_qp = irdma_sc_suspend_qp, +}; + +static struct irdma_mr_ops iw_mr_ops = { + .alloc_stag = irdma_sc_alloc_stag, + .dealloc_stag = irdma_sc_dealloc_stag, + .mr_reg_non_shared = irdma_sc_mr_reg_non_shared, + .mr_reg_shared = irdma_sc_mr_reg_shared, + .mw_alloc = irdma_sc_mw_alloc, + .query_stag = irdma_sc_query_stag, +}; + +static struct irdma_cqp_misc_ops iw_cqp_misc_ops = { + .add_arp_cache_entry = irdma_sc_add_arp_cache_entry, + .add_local_mac_entry = irdma_sc_add_local_mac_entry, + .alloc_local_mac_entry = irdma_sc_alloc_local_mac_entry, + .cqp_nop = irdma_sc_cqp_nop, + .del_arp_cache_entry = irdma_sc_del_arp_cache_entry, + .del_local_mac_entry = irdma_sc_del_local_mac_entry, + .gather_stats = irdma_sc_gather_stats, + .manage_apbvt_entry = irdma_sc_manage_apbvt_entry, + .manage_push_page = irdma_sc_manage_push_page, + .manage_qhash_table_entry = irdma_sc_manage_qhash_table_entry, + .manage_stats_instance = irdma_sc_manage_stats_inst, + .manage_ws_node = irdma_sc_manage_ws_node, + .query_arp_cache_entry = irdma_sc_query_arp_cache_entry, + .query_rdma_features = irdma_sc_query_rdma_features, + .set_up_map = irdma_sc_set_up_map, +}; + +static struct irdma_irq_ops iw_irq_ops = { + 
.irdma_cfg_aeq = irdma_cfg_aeq, + .irdma_cfg_ceq = irdma_cfg_ceq, + .irdma_dis_irq = irdma_disable_irq, + .irdma_en_irq = irdma_ena_irq, +}; + +static struct irdma_cqp_ops iw_cqp_ops = { + .check_cqp_progress = irdma_check_cqp_progress, + .cqp_create = irdma_sc_cqp_create, + .cqp_destroy = irdma_sc_cqp_destroy, + .cqp_get_next_send_wqe = irdma_sc_cqp_get_next_send_wqe, + .cqp_init = irdma_sc_cqp_init, + .cqp_post_sq = irdma_sc_cqp_post_sq, + .poll_for_cqp_op_done = irdma_sc_poll_for_cqp_op_done, +}; + +static struct irdma_priv_cq_ops iw_priv_cq_ops = { + .cq_ack = irdma_sc_cq_ack, + .cq_create = irdma_sc_cq_create, + .cq_destroy = irdma_sc_cq_destroy, + .cq_init = irdma_sc_cq_init, + .cq_modify = irdma_sc_cq_modify, + .cq_resize = irdma_sc_cq_resize, +}; + +static struct irdma_ccq_ops iw_ccq_ops = { + .ccq_arm = irdma_sc_ccq_arm, + .ccq_create = irdma_sc_ccq_create, + .ccq_create_done = irdma_sc_ccq_create_done, + .ccq_destroy = irdma_sc_ccq_destroy, + .ccq_get_cqe_info = irdma_sc_ccq_get_cqe_info, + .ccq_init = irdma_sc_ccq_init, +}; + +static struct irdma_ceq_ops iw_ceq_ops = { + .cceq_create = irdma_sc_cceq_create, + .cceq_create_done = irdma_sc_cceq_create_done, + .cceq_destroy_done = irdma_sc_cceq_destroy_done, + .ceq_create = irdma_sc_ceq_create, + .ceq_destroy = irdma_sc_ceq_destroy, + .ceq_init = irdma_sc_ceq_init, + .process_ceq = irdma_sc_process_ceq, +}; + +static struct irdma_aeq_ops iw_aeq_ops = { + .aeq_create = irdma_sc_aeq_create, + .aeq_create_done = irdma_sc_aeq_create_done, + .aeq_destroy = irdma_sc_aeq_destroy, + .aeq_destroy_done = irdma_sc_aeq_destroy_done, + .aeq_init = irdma_sc_aeq_init, + .get_next_aeqe = irdma_sc_get_next_aeqe, + .repost_aeq_entries = irdma_sc_repost_aeq_entries, +}; + +static struct irdma_hmc_ops iw_hmc_ops = { + .cfg_iw_fpm = irdma_sc_cfg_iw_fpm, + .commit_fpm_val = irdma_sc_commit_fpm_val, + .commit_fpm_val_done = irdma_sc_commit_fpm_val_done, + .create_hmc_object = irdma_sc_create_hmc_obj, + .del_hmc_object = irdma_sc_del_hmc_obj, + .init_iw_hmc = irdma_sc_init_iw_hmc, + .manage_hmc_pm_func_table = irdma_sc_manage_hmc_pm_func_table, + .manage_hmc_pm_func_table_done = irdma_sc_manage_hmc_pm_func_table_done, + .parse_fpm_commit_buf = irdma_sc_parse_fpm_commit_buf, + .parse_fpm_query_buf = irdma_sc_parse_fpm_query_buf, + .pf_init_vfhmc = NULL, + .query_fpm_val = irdma_sc_query_fpm_val, + .query_fpm_val_done = irdma_sc_query_fpm_val_done, + .static_hmc_pages_allocated = irdma_sc_static_hmc_pages_allocated, + .vf_cfg_vffpm = NULL, +}; + +/** + * irdma_wait_pe_ready - Check if firmware is ready + * @dev: provides access to registers + */ +static int irdma_wait_pe_ready(struct irdma_sc_dev *dev) +{ + u32 statuscpu0; + u32 statuscpu1; + u32 statuscpu2; + u32 retrycount = 0; + + do { + statuscpu0 = readl(dev->hw_regs[IRDMA_GLPE_CPUSTATUS0]); + statuscpu1 = readl(dev->hw_regs[IRDMA_GLPE_CPUSTATUS1]); + statuscpu2 = readl(dev->hw_regs[IRDMA_GLPE_CPUSTATUS2]); + if (statuscpu0 == 0x80 && statuscpu1 == 0x80 && + statuscpu2 == 0x80) + return 0; + mdelay(1000); + } while (retrycount++ < dev->hw_attrs.max_pe_ready_count); + return -1; +} + +/** + * irdma_sc_ctrl_init - Initialize control part of device + * @ver: version + * @dev: Device pointer + * @info: Device init info + */ +enum irdma_status_code irdma_sc_ctrl_init(enum irdma_vers ver, + struct irdma_sc_dev *dev, + struct irdma_device_init_info *info) +{ + u32 val; + u16 hmc_fcn = 0; + enum irdma_status_code ret_code = 0; + u8 db_size; + + spin_lock_init(&dev->cqp_lock); + 
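/* cqp_lock serializes direct CQP SQ submission in irdma_process_cqp_cmd() + * with the command backlog drained from irdma_process_bh() once the ring + * has room again. + */ +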
INIT_LIST_HEAD(&dev->cqp_cmd_head); /* for CQP command backlog */ + dev->hmc_fn_id = info->hmc_fn_id; + dev->privileged = info->privileged; + dev->fpm_query_buf_pa = info->fpm_query_buf_pa; + dev->fpm_query_buf = info->fpm_query_buf; + dev->fpm_commit_buf_pa = info->fpm_commit_buf_pa; + dev->fpm_commit_buf = info->fpm_commit_buf; + dev->hw = info->hw; + dev->hw->hw_addr = info->bar0; + dev->irq_ops = &iw_irq_ops; + dev->cqp_ops = &iw_cqp_ops; + dev->ccq_ops = &iw_ccq_ops; + dev->ceq_ops = &iw_ceq_ops; + dev->aeq_ops = &iw_aeq_ops; + dev->hmc_ops = &iw_hmc_ops; + dev->iw_priv_cq_ops = &iw_priv_cq_ops; + + /* Setup the hardware limits, hmc may limit further */ + dev->hw_attrs.min_hw_qp_id = IRDMA_MIN_IW_QP_ID; + dev->hw_attrs.min_hw_aeq_size = IRDMA_MIN_AEQ_ENTRIES; + dev->hw_attrs.max_hw_aeq_size = IRDMA_MAX_AEQ_ENTRIES; + dev->hw_attrs.min_hw_ceq_size = IRDMA_MIN_CEQ_ENTRIES; + dev->hw_attrs.max_hw_ceq_size = IRDMA_MAX_CEQ_ENTRIES; + dev->hw_attrs.uk_attrs.min_hw_cq_size = IRDMA_MIN_CQ_SIZE; + dev->hw_attrs.uk_attrs.max_hw_cq_size = IRDMA_MAX_CQ_SIZE; + dev->hw_attrs.uk_attrs.max_hw_wq_frags = IRDMA_MAX_WQ_FRAGMENT_COUNT; + dev->hw_attrs.uk_attrs.max_hw_read_sges = IRDMA_MAX_SGE_RD; + dev->hw_attrs.max_hw_outbound_msg_size = IRDMA_MAX_OUTBOUND_MSG_SIZE; + dev->hw_attrs.max_mr_size = IRDMA_MAX_MR_SIZE; + dev->hw_attrs.max_hw_inbound_msg_size = IRDMA_MAX_INBOUND_MSG_SIZE; + dev->hw_attrs.max_hw_device_pages = IRDMA_MAX_PUSH_PAGE_COUNT; + dev->hw_attrs.max_hw_vf_fpm_id = IRDMA_MAX_VF_FPM_ID; + dev->hw_attrs.first_hw_vf_fpm_id = IRDMA_FIRST_VF_FPM_ID; + dev->hw_attrs.uk_attrs.max_hw_inline = IRDMA_MAX_INLINE_DATA_SIZE; + dev->hw_attrs.max_hw_ird = IRDMA_MAX_IRD_SIZE; + dev->hw_attrs.max_hw_ord = IRDMA_MAX_ORD_SIZE; + dev->hw_attrs.max_hw_wqes = IRDMA_MAX_WQ_ENTRIES; + dev->hw_attrs.max_qp_wr = IRDMA_MAX_QP_WRS; + + dev->hw_attrs.uk_attrs.max_hw_rq_quanta = IRDMA_QP_SW_MAX_RQ_QUANTA; + dev->hw_attrs.uk_attrs.max_hw_wq_quanta = IRDMA_QP_SW_MAX_WQ_QUANTA; + dev->hw_attrs.max_hw_pds = IRDMA_MAX_PDS; + dev->hw_attrs.max_hw_ena_vf_count = IRDMA_MAX_PE_ENA_VF_COUNT; + + dev->hw_attrs.max_pe_ready_count = 14; + dev->hw_attrs.max_done_count = IRDMA_DONE_COUNT; + dev->hw_attrs.max_sleep_count = IRDMA_SLEEP_COUNT; + dev->hw_attrs.max_cqp_compl_wait_time_ms = CQP_COMPL_WAIT_TIME_MS; + + dev->hw_attrs.uk_attrs.hw_rev = ver; + + info->init_hw(dev); + if (dev->privileged) { + if (irdma_wait_pe_ready(dev)) + return IRDMA_ERR_TIMEOUT; + + val = readl(dev->hw_regs[IRDMA_GLPCI_LBARCTRL]); + db_size = (u8)RS_32(val, IRDMA_GLPCI_LBARCTRL_PE_DB_SIZE); + if (db_size != IRDMA_PE_DB_SIZE_4M && + db_size != IRDMA_PE_DB_SIZE_8M) { + pr_err("RDMA feature not enabled! 
db_size=%d\n", + db_size); + return IRDMA_ERR_PE_DOORBELL_NOT_ENA; + } + } + dev->hw->hmc.hmc_fn_id = (u8)hmc_fcn; + dev->db_addr = dev->hw->hw_addr + (uintptr_t)dev->hw_regs[IRDMA_DB_ADDR_OFFSET]; + + return ret_code; +} + +/** + * irdma_sc_rt_init - Runtime initialize device + * @dev: IWARP device pointer + */ +void irdma_sc_rt_init(struct irdma_sc_dev *dev) +{ + mutex_init(&dev->ws_mutex); + irdma_device_init_uk(&dev->dev_uk); + dev->cqp_misc_ops = &iw_cqp_misc_ops; + dev->iw_pd_ops = &iw_pd_ops; + dev->iw_priv_qp_ops = &iw_priv_qp_ops; + dev->mr_ops = &iw_mr_ops; + dev->iw_uda_ops = &irdma_uda_ops; +} + +/** + * irdma_update_stats - Update statistics + * @hw_stats: hw_stats instance to update + * @gather_stats: updated stat counters + * @last_gather_stats: last stat counters + */ +void irdma_update_stats(struct irdma_dev_hw_stats *hw_stats, + struct irdma_gather_stats *gather_stats, + struct irdma_gather_stats *last_gather_stats) +{ + u64 *stats_val = hw_stats->stats_val_32; + + stats_val[IRDMA_HW_STAT_INDEX_RXVLANERR] += + IRDMA_STATS_DELTA(gather_stats->rxvlanerr, + last_gather_stats->rxvlanerr, + IRDMA_MAX_STATS_32); + stats_val[IRDMA_HW_STAT_INDEX_IP4RXDISCARD] += + IRDMA_STATS_DELTA(gather_stats->ip4rxdiscard, + last_gather_stats->ip4rxdiscard, + IRDMA_MAX_STATS_32); + stats_val[IRDMA_HW_STAT_INDEX_IP4RXTRUNC] += + IRDMA_STATS_DELTA(gather_stats->ip4rxtrunc, + last_gather_stats->ip4rxtrunc, + IRDMA_MAX_STATS_32); + stats_val[IRDMA_HW_STAT_INDEX_IP4TXNOROUTE] += + IRDMA_STATS_DELTA(gather_stats->ip4txnoroute, + last_gather_stats->ip4txnoroute, + IRDMA_MAX_STATS_32); + stats_val[IRDMA_HW_STAT_INDEX_IP6RXDISCARD] += + IRDMA_STATS_DELTA(gather_stats->ip6rxdiscard, + last_gather_stats->ip6rxdiscard, + IRDMA_MAX_STATS_32); + stats_val[IRDMA_HW_STAT_INDEX_IP6RXTRUNC] += + IRDMA_STATS_DELTA(gather_stats->ip6rxtrunc, + last_gather_stats->ip6rxtrunc, + IRDMA_MAX_STATS_32); + stats_val[IRDMA_HW_STAT_INDEX_IP6TXNOROUTE] += + IRDMA_STATS_DELTA(gather_stats->ip6txnoroute, + last_gather_stats->ip6txnoroute, + IRDMA_MAX_STATS_32); + stats_val[IRDMA_HW_STAT_INDEX_TCPRTXSEG] += + IRDMA_STATS_DELTA(gather_stats->tcprtxseg, + last_gather_stats->tcprtxseg, + IRDMA_MAX_STATS_32); + stats_val[IRDMA_HW_STAT_INDEX_TCPRXOPTERR] += + IRDMA_STATS_DELTA(gather_stats->tcprxopterr, + last_gather_stats->tcprxopterr, + IRDMA_MAX_STATS_32); + stats_val[IRDMA_HW_STAT_INDEX_TCPRXPROTOERR] += + IRDMA_STATS_DELTA(gather_stats->tcprxprotoerr, + last_gather_stats->tcprxprotoerr, + IRDMA_MAX_STATS_32); + stats_val[IRDMA_HW_STAT_INDEX_RXRPCNPHANDLED] += + IRDMA_STATS_DELTA(gather_stats->rxrpcnphandled, + last_gather_stats->rxrpcnphandled, + IRDMA_MAX_STATS_32); + stats_val[IRDMA_HW_STAT_INDEX_RXRPCNPIGNORED] += + IRDMA_STATS_DELTA(gather_stats->rxrpcnpignored, + last_gather_stats->rxrpcnpignored, + IRDMA_MAX_STATS_32); + stats_val[IRDMA_HW_STAT_INDEX_TXNPCNPSENT] += + IRDMA_STATS_DELTA(gather_stats->txnpcnpsent, + last_gather_stats->txnpcnpsent, + IRDMA_MAX_STATS_32); + stats_val = hw_stats->stats_val_64; + stats_val[IRDMA_HW_STAT_INDEX_IP4RXOCTS] += + IRDMA_STATS_DELTA(gather_stats->ip4rxocts, + last_gather_stats->ip4rxocts, + IRDMA_MAX_STATS_48); + stats_val[IRDMA_HW_STAT_INDEX_IP4RXPKTS] += + IRDMA_STATS_DELTA(gather_stats->ip4rxpkts, + last_gather_stats->ip4rxpkts, + IRDMA_MAX_STATS_48); + stats_val[IRDMA_HW_STAT_INDEX_IP4RXFRAGS] += + IRDMA_STATS_DELTA(gather_stats->ip4txfrag, + last_gather_stats->ip4txfrag, + IRDMA_MAX_STATS_48); + stats_val[IRDMA_HW_STAT_INDEX_IP4RXMCPKTS] += + 
IRDMA_STATS_DELTA(gather_stats->ip4rxmcpkts, + last_gather_stats->ip4rxmcpkts, + IRDMA_MAX_STATS_48); + stats_val[IRDMA_HW_STAT_INDEX_IP4TXOCTS] += + IRDMA_STATS_DELTA(gather_stats->ip4txocts, + last_gather_stats->ip4txocts, + IRDMA_MAX_STATS_48); + stats_val[IRDMA_HW_STAT_INDEX_IP4TXPKTS] += + IRDMA_STATS_DELTA(gather_stats->ip4txpkts, + last_gather_stats->ip4txpkts, + IRDMA_MAX_STATS_48); + stats_val[IRDMA_HW_STAT_INDEX_IP4TXFRAGS] += + IRDMA_STATS_DELTA(gather_stats->ip4txfrag, + last_gather_stats->ip4txfrag, + IRDMA_MAX_STATS_48); + stats_val[IRDMA_HW_STAT_INDEX_IP4TXMCPKTS] += + IRDMA_STATS_DELTA(gather_stats->ip4txmcpkts, + last_gather_stats->ip4txmcpkts, + IRDMA_MAX_STATS_48); + stats_val[IRDMA_HW_STAT_INDEX_IP6RXOCTS] += + IRDMA_STATS_DELTA(gather_stats->ip6rxocts, + last_gather_stats->ip6rxocts, + IRDMA_MAX_STATS_48); + stats_val[IRDMA_HW_STAT_INDEX_IP6RXPKTS] += + IRDMA_STATS_DELTA(gather_stats->ip6rxpkts, + last_gather_stats->ip6rxpkts, + IRDMA_MAX_STATS_48); + stats_val[IRDMA_HW_STAT_INDEX_IP6RXFRAGS] += + IRDMA_STATS_DELTA(gather_stats->ip6txfrags, + last_gather_stats->ip6txfrags, + IRDMA_MAX_STATS_48); + stats_val[IRDMA_HW_STAT_INDEX_IP6RXMCPKTS] += + IRDMA_STATS_DELTA(gather_stats->ip6rxmcpkts, + last_gather_stats->ip6rxmcpkts, + IRDMA_MAX_STATS_48); + stats_val[IRDMA_HW_STAT_INDEX_IP6TXOCTS] += + IRDMA_STATS_DELTA(gather_stats->ip6txocts, + last_gather_stats->ip6txocts, + IRDMA_MAX_STATS_48); + stats_val[IRDMA_HW_STAT_INDEX_IP6TXPKTS] += + IRDMA_STATS_DELTA(gather_stats->ip6txpkts, + last_gather_stats->ip6txpkts, + IRDMA_MAX_STATS_48); + stats_val[IRDMA_HW_STAT_INDEX_IP6TXFRAGS] += + IRDMA_STATS_DELTA(gather_stats->ip6txfrags, + last_gather_stats->ip6txfrags, + IRDMA_MAX_STATS_48); + stats_val[IRDMA_HW_STAT_INDEX_IP6TXMCPKTS] += + IRDMA_STATS_DELTA(gather_stats->ip6txmcpkts, + last_gather_stats->ip6txmcpkts, + IRDMA_MAX_STATS_48); + stats_val[IRDMA_HW_STAT_INDEX_TCPRXSEGS] += + IRDMA_STATS_DELTA(gather_stats->tcprxsegs, + last_gather_stats->tcprxsegs, + IRDMA_MAX_STATS_48); + stats_val[IRDMA_HW_STAT_INDEX_TCPTXSEG] += + IRDMA_STATS_DELTA(gather_stats->tcptxsegs, + last_gather_stats->tcptxsegs, + IRDMA_MAX_STATS_48); + stats_val[IRDMA_HW_STAT_INDEX_RDMARXRDS] += + IRDMA_STATS_DELTA(gather_stats->rdmarxrds, + last_gather_stats->rdmarxrds, + IRDMA_MAX_STATS_48); + stats_val[IRDMA_HW_STAT_INDEX_RDMARXSNDS] += + IRDMA_STATS_DELTA(gather_stats->rdmarxsnds, + last_gather_stats->rdmarxsnds, + IRDMA_MAX_STATS_48); + stats_val[IRDMA_HW_STAT_INDEX_RDMARXWRS] += + IRDMA_STATS_DELTA(gather_stats->rdmarxwrs, + last_gather_stats->rdmarxwrs, + IRDMA_MAX_STATS_48); + stats_val[IRDMA_HW_STAT_INDEX_RDMATXRDS] += + IRDMA_STATS_DELTA(gather_stats->rdmatxrds, + last_gather_stats->rdmatxrds, + IRDMA_MAX_STATS_48); + stats_val[IRDMA_HW_STAT_INDEX_RDMATXSNDS] += + IRDMA_STATS_DELTA(gather_stats->rdmatxsnds, + last_gather_stats->rdmatxsnds, + IRDMA_MAX_STATS_48); + stats_val[IRDMA_HW_STAT_INDEX_RDMATXWRS] += + IRDMA_STATS_DELTA(gather_stats->rdmatxwrs, + last_gather_stats->rdmatxwrs, + IRDMA_MAX_STATS_48); + stats_val[IRDMA_HW_STAT_INDEX_RDMAVBND] += + IRDMA_STATS_DELTA(gather_stats->rdmavbn, + last_gather_stats->rdmavbn, + IRDMA_MAX_STATS_48); + stats_val[IRDMA_HW_STAT_INDEX_RDMAVINV] += + IRDMA_STATS_DELTA(gather_stats->rdmavinv, + last_gather_stats->rdmavinv, + IRDMA_MAX_STATS_48); + stats_val[IRDMA_HW_STAT_INDEX_UDPRXPKTS] += + IRDMA_STATS_DELTA(gather_stats->udprxpkts, + last_gather_stats->udprxpkts, + IRDMA_MAX_STATS_48); + stats_val[IRDMA_HW_STAT_INDEX_UDPTXPKTS] += + 
IRDMA_STATS_DELTA(gather_stats->udptxpkts, + last_gather_stats->udptxpkts, + IRDMA_MAX_STATS_48); + stats_val[IRDMA_HW_STAT_INDEX_RXNPECNMARKEDPKTS] += + IRDMA_STATS_DELTA(gather_stats->rxnpecnmrkpkts, + last_gather_stats->rxnpecnmrkpkts, + IRDMA_MAX_STATS_48); + memcpy(last_gather_stats, gather_stats, sizeof(*last_gather_stats)); +} diff --git a/drivers/infiniband/hw/irdma/defs.h b/drivers/infiniband/hw/irdma/defs.h new file mode 100644 index 000000000000..156d27be612e --- /dev/null +++ b/drivers/infiniband/hw/irdma/defs.h @@ -0,0 +1,2132 @@ +/* SPDX-License-Identifier: GPL-2.0 or Linux-OpenIB */ +/* Copyright (c) 2015 - 2019 Intel Corporation */ +#ifndef IRDMA_DEFS_H +#define IRDMA_DEFS_H + +#define IRDMA_FIRST_USER_QP_ID 3 + +#define ECN_CODE_PT_VAL 2 + +#define IRDMA_PUSH_OFFSET (8 * 1024 * 1024) +#define IRDMA_PF_FIRST_PUSH_PAGE_INDEX 16 +#define IRDMA_PF_BAR_RSVD (60 * 1024) +#define IRDMA_VF_PUSH_OFFSET ((8 + 64) * 1024) +#define IRDMA_VF_FIRST_PUSH_PAGE_INDEX 2 +#define IRDMA_VF_BAR_RSVD 4096 +#define IRDMA_VF_STATS_SIZE_V0 280 + +#define IRDMA_PE_DB_SIZE_4M 1 +#define IRDMA_PE_DB_SIZE_8M 2 + +enum irdma_protocol_used { + IRDMA_ANY_PROTOCOL = 0, + IRDMA_IWARP_PROTOCOL_ONLY = 1, + IRDMA_ROCE_PROTOCOL_ONLY = 2, +}; + +#define IRDMA_QP_STATE_INVALID 0 +#define IRDMA_QP_STATE_IDLE 1 +#define IRDMA_QP_STATE_RTS 2 +#define IRDMA_QP_STATE_CLOSING 3 +#define IRDMA_QP_STATE_RTR 4 +#define IRDMA_QP_STATE_TERMINATE 5 +#define IRDMA_QP_STATE_ERROR 6 + +#define IRDMA_MAX_USER_PRIORITY 8 +#define IRDMA_MAX_APPS 8 +#define IRDMA_MAX_STATS_COUNT 128 +#define IRDMA_FIRST_NON_PF_STAT 4 + +#define IRDMA_MIN_MTU_IPV4 576 +#define IRDMA_MIN_MTU_IPV6 1280 +#define IRDMA_MTU_TO_MSS_IPV4 40 +#define IRDMA_MTU_TO_MSS_IPV6 60 +#define IRDMA_DEFAULT_MTU 1500 + +#define IRDMA_MAX_ENCODED_IRD_SIZE 4 + +#define Q2_FPSN_OFFSET 64 +#define TERM_DDP_LEN_TAGGED 14 +#define TERM_DDP_LEN_UNTAGGED 18 +#define TERM_RDMA_LEN 28 +#define RDMA_OPCODE_M 0x0f +#define RDMA_READ_REQ_OPCODE 1 +#define Q2_BAD_FRAME_OFFSET 72 +#define CQE_MAJOR_DRV 0x8000 + +#define IRDMA_TERM_SENT 1 +#define IRDMA_TERM_RCVD 2 +#define IRDMA_TERM_DONE 4 +#define IRDMA_MAC_HLEN 14 +#define IRDMA_CQP_WAIT_POLL_REGS 1 +#define IRDMA_CQP_WAIT_POLL_CQ 2 +#define IRDMA_CQP_WAIT_EVENT 3 + +#define IRDMA_AE_SOURCE_RSVD 0x0 +#define IRDMA_AE_SOURCE_RQ 0x1 +#define IRDMA_AE_SOURCE_RQ_0011 0x3 + +#define IRDMA_AE_SOURCE_CQ 0x2 +#define IRDMA_AE_SOURCE_CQ_0110 0x6 +#define IRDMA_AE_SOURCE_CQ_1010 0xa +#define IRDMA_AE_SOURCE_CQ_1110 0xe + +#define IRDMA_AE_SOURCE_SQ 0x5 +#define IRDMA_AE_SOURCE_SQ_0111 0x7 + +#define IRDMA_AE_SOURCE_IN_RR_WR 0x9 +#define IRDMA_AE_SOURCE_IN_RR_WR_1011 0xb +#define IRDMA_AE_SOURCE_OUT_RR 0xd +#define IRDMA_AE_SOURCE_OUT_RR_1111 0xf + +#define IRDMA_TCP_STATE_NON_EXISTENT 0 +#define IRDMA_TCP_STATE_CLOSED 1 +#define IRDMA_TCP_STATE_LISTEN 2 +#define IRDMA_STATE_SYN_SEND 3 +#define IRDMA_TCP_STATE_SYN_RECEIVED 4 +#define IRDMA_TCP_STATE_ESTABLISHED 5 +#define IRDMA_TCP_STATE_CLOSE_WAIT 6 +#define IRDMA_TCP_STATE_FIN_WAIT_1 7 +#define IRDMA_TCP_STATE_CLOSING 8 +#define IRDMA_TCP_STATE_LAST_ACK 9 +#define IRDMA_TCP_STATE_FIN_WAIT_2 10 +#define IRDMA_TCP_STATE_TIME_WAIT 11 +#define IRDMA_TCP_STATE_RESERVED_1 12 +#define IRDMA_TCP_STATE_RESERVED_2 13 +#define IRDMA_TCP_STATE_RESERVED_3 14 +#define IRDMA_TCP_STATE_RESERVED_4 15 + +#define IRDMA_CQP_SW_SQSIZE_4 4 +#define IRDMA_CQP_SW_SQSIZE_2048 2048 + +#define IRDMA_CQ_TYPE_IWARP 1 +#define IRDMA_CQ_TYPE_ILQ 2 +#define IRDMA_CQ_TYPE_IEQ 3 +#define IRDMA_CQ_TYPE_CQP 4 +/* CQP SQ 
WQES */ +#define IRDMA_QP_TYPE_IWARP 1 +#define IRDMA_QP_TYPE_UDA 2 +#define IRDMA_QP_TYPE_ROCE_RC 3 +#define IRDMA_QP_TYPE_ROCE_UD 4 + +#define IRDMA_DONE_COUNT 1000 +#define IRDMA_SLEEP_COUNT 10 + +#define IRDMA_UPDATE_SD_BUFF_SIZE 128 +#define IRDMA_FEATURE_BUF_SIZE (8 * IRDMA_MAX_FEATURES) + +#define IRDMA_MAX_QUANTA_PER_WR 8 + +#define IRDMA_QP_SW_MAX_WQ_QUANTA 32768 +#define IRDMA_QP_SW_MAX_SQ_QUANTA 32768 +#define IRDMA_QP_SW_MAX_RQ_QUANTA 32768 +#define IRDMA_MAX_QP_WRS (((IRDMA_QP_SW_MAX_WQ_QUANTA - IRDMA_SQ_RSVD) / IRDMA_MAX_QUANTA_PER_WR)) + +#define IRDMAQP_TERM_SEND_TERM_AND_FIN 0 +#define IRDMAQP_TERM_SEND_TERM_ONLY 1 +#define IRDMAQP_TERM_SEND_FIN_ONLY 2 +#define IRDMAQP_TERM_DONOT_SEND_TERM_OR_FIN 3 + +#define IRDMA_HW_PAGE_SIZE 4096 +#define IRDMA_HW_PAGE_SHIFT 12 +#define IRDMA_CQE_QTYPE_RQ 0 +#define IRDMA_CQE_QTYPE_SQ 1 + +#define IRDMA_QP_SW_MIN_WQSIZE 8u /* in WRs*/ +#define IRDMA_QP_WQE_MIN_SIZE 32 +#define IRDMA_QP_WQE_MAX_SIZE 256 +#define IRDMA_QP_WQE_MIN_QUANTA 1 +#define IRDMA_MAX_RQ_WQE_SHIFT_GEN1 2 + +#define IRDMA_SQ_RSVD 258 +#define IRDMA_RQ_RSVD 1 + +#define IRDMA_FEATURE_RTS_AE 1ULL +#define IRDMA_FEATURE_CQ_RESIZE 2ULL + +#define IRDMAQP_OP_RDMA_WRITE 0x00 +#define IRDMAQP_OP_RDMA_READ 0x01 +#define IRDMAQP_OP_RDMA_SEND 0x03 +#define IRDMAQP_OP_RDMA_SEND_INV 0x04 +#define IRDMAQP_OP_RDMA_SEND_SOL_EVENT 0x05 +#define IRDMAQP_OP_RDMA_SEND_SOL_EVENT_INV 0x06 +#define IRDMAQP_OP_BIND_MW 0x08 +#define IRDMAQP_OP_FAST_REGISTER 0x09 +#define IRDMAQP_OP_LOCAL_INVALIDATE 0x0a +#define IRDMAQP_OP_RDMA_READ_LOC_INV 0x0b +#define IRDMAQP_OP_NOP 0x0c +#define IRDMAQP_OP_RDMA_WRITE_SOL 0x0d +#define IRDMAQP_OP_GEN_RTS_AE 0x30 + +#define IRDMA_OP_CEQ_DESTROY 1 +#define IRDMA_OP_AEQ_DESTROY 2 +#define IRDMA_OP_DELETE_ARP_CACHE_ENTRY 3 +#define IRDMA_OP_MANAGE_APBVT_ENTRY 4 +#define IRDMA_OP_CEQ_CREATE 5 +#define IRDMA_OP_AEQ_CREATE 6 +#define IRDMA_OP_MANAGE_QHASH_TABLE_ENTRY 7 +#define IRDMA_OP_QP_MODIFY 8 +#define IRDMA_OP_QP_UPLOAD_CONTEXT 9 +#define IRDMA_OP_CQ_CREATE 10 +#define IRDMA_OP_CQ_DESTROY 11 +#define IRDMA_OP_QP_CREATE 12 +#define IRDMA_OP_QP_DESTROY 13 +#define IRDMA_OP_ALLOC_STAG 14 +#define IRDMA_OP_MR_REG_NON_SHARED 15 +#define IRDMA_OP_DEALLOC_STAG 16 +#define IRDMA_OP_MW_ALLOC 17 +#define IRDMA_OP_QP_FLUSH_WQES 18 +#define IRDMA_OP_ADD_ARP_CACHE_ENTRY 19 +#define IRDMA_OP_MANAGE_PUSH_PAGE 20 +#define IRDMA_OP_UPDATE_PE_SDS 21 +#define IRDMA_OP_MANAGE_HMC_PM_FUNC_TABLE 22 +#define IRDMA_OP_SUSPEND 23 +#define IRDMA_OP_RESUME 24 +#define IRDMA_OP_MANAGE_VF_PBLE_BP 25 +#define IRDMA_OP_QUERY_FPM_VAL 26 +#define IRDMA_OP_COMMIT_FPM_VAL 27 +#define IRDMA_OP_REQ_CMDS 28 +#define IRDMA_OP_CMPL_CMDS 29 +#define IRDMA_OP_AH_CREATE 30 +#define IRDMA_OP_AH_MODIFY 31 +#define IRDMA_OP_AH_DESTROY 32 +#define IRDMA_OP_MC_CREATE 33 +#define IRDMA_OP_MC_DESTROY 34 +#define IRDMA_OP_MC_MODIFY 35 +#define IRDMA_OP_STATS_ALLOCATE 36 +#define IRDMA_OP_STATS_FREE 37 +#define IRDMA_OP_STATS_GATHER 38 +#define IRDMA_OP_WS_ADD_NODE 39 +#define IRDMA_OP_WS_MODIFY_NODE 40 +#define IRDMA_OP_WS_DELETE_NODE 41 +#define IRDMA_OP_SET_UP_MAP 42 +#define IRDMA_OP_GEN_AE 43 +#define IRDMA_OP_QUERY_RDMA_FEATURES 44 +#define IRDMA_OP_ALLOC_LOCAL_MAC_ENTRY 45 +#define IRDMA_OP_ADD_LOCAL_MAC_ENTRY 46 +#define IRDMA_OP_DELETE_LOCAL_MAC_ENTRY 47 +#define IRDMA_OP_CQ_MODIFY 48 +#define IRDMA_OP_SIZE_CQP_STAT_ARRAY 49 + +#define IRDMA_CQP_OP_CREATE_QP 0 +#define IRDMA_CQP_OP_MODIFY_QP 0x1 +#define IRDMA_CQP_OP_DESTROY_QP 0x02 +#define IRDMA_CQP_OP_CREATE_CQ 0x03 +#define 
IRDMA_CQP_OP_MODIFY_CQ 0x04 +#define IRDMA_CQP_OP_DESTROY_CQ 0x05 +#define IRDMA_CQP_OP_ALLOC_STAG 0x09 +#define IRDMA_CQP_OP_REG_MR 0x0a +#define IRDMA_CQP_OP_QUERY_STAG 0x0b +#define IRDMA_CQP_OP_REG_SMR 0x0c +#define IRDMA_CQP_OP_DEALLOC_STAG 0x0d +#define IRDMA_CQP_OP_MANAGE_LOC_MAC_TABLE 0x0e +#define IRDMA_CQP_OP_MANAGE_ARP 0x0f +#define IRDMA_CQP_OP_MANAGE_VF_PBLE_BP 0x10 +#define IRDMA_CQP_OP_MANAGE_PUSH_PAGES 0x11 +#define IRDMA_CQP_OP_QUERY_RDMA_FEATURES 0x12 +#define IRDMA_CQP_OP_UPLOAD_CONTEXT 0x13 +#define IRDMA_CQP_OP_ALLOCATE_LOC_MAC_TABLE_ENTRY 0x14 +#define IRDMA_CQP_OP_UPLOAD_CONTEXT 0x13 +#define IRDMA_CQP_OP_MANAGE_HMC_PM_FUNC_TABLE 0x15 +#define IRDMA_CQP_OP_CREATE_CEQ 0x16 +#define IRDMA_CQP_OP_DESTROY_CEQ 0x18 +#define IRDMA_CQP_OP_CREATE_AEQ 0x19 +#define IRDMA_CQP_OP_DESTROY_AEQ 0x1b +#define IRDMA_CQP_OP_CREATE_ADDR_HANDLE 0x1c +#define IRDMA_CQP_OP_MODIFY_ADDR_HANDLE 0x1d +#define IRDMA_CQP_OP_DESTROY_ADDR_HANDLE 0x1e +#define IRDMA_CQP_OP_UPDATE_PE_SDS 0x1f +#define IRDMA_CQP_OP_QUERY_FPM_VAL 0x20 +#define IRDMA_CQP_OP_COMMIT_FPM_VAL 0x21 +#define IRDMA_CQP_OP_FLUSH_WQES 0x22 +/* IRDMA_CQP_OP_GEN_AE is the same value as IRDMA_CQP_OP_FLUSH_WQES */ +#define IRDMA_CQP_OP_GEN_AE 0x22 +#define IRDMA_CQP_OP_MANAGE_APBVT 0x23 +#define IRDMA_CQP_OP_NOP 0x24 +#define IRDMA_CQP_OP_MANAGE_QUAD_HASH_TABLE_ENTRY 0x25 +#define IRDMA_CQP_OP_CREATE_MCAST_GRP 0x26 +#define IRDMA_CQP_OP_MODIFY_MCAST_GRP 0x27 +#define IRDMA_CQP_OP_DESTROY_MCAST_GRP 0x28 +#define IRDMA_CQP_OP_SUSPEND_QP 0x29 +#define IRDMA_CQP_OP_RESUME_QP 0x2a +#define IRDMA_CQP_OP_SHMC_PAGES_ALLOCATED 0x2b +#define IRDMA_CQP_OP_WORK_SCHED_NODE 0x2c +#define IRDMA_CQP_OP_MANAGE_STATS 0x2d +#define IRDMA_CQP_OP_GATHER_STATS 0x2e +#define IRDMA_CQP_OP_UP_MAP 0x2f + +/* Async Events codes */ +#define IRDMA_AE_AMP_UNALLOCATED_STAG 0x0102 +#define IRDMA_AE_AMP_INVALID_STAG 0x0103 +#define IRDMA_AE_AMP_BAD_QP 0x0104 +#define IRDMA_AE_AMP_BAD_PD 0x0105 +#define IRDMA_AE_AMP_BAD_STAG_KEY 0x0106 +#define IRDMA_AE_AMP_BAD_STAG_INDEX 0x0107 +#define IRDMA_AE_AMP_BOUNDS_VIOLATION 0x0108 +#define IRDMA_AE_AMP_RIGHTS_VIOLATION 0x0109 +#define IRDMA_AE_AMP_TO_WRAP 0x010a +#define IRDMA_AE_AMP_FASTREG_VALID_STAG 0x010c +#define IRDMA_AE_AMP_FASTREG_MW_STAG 0x010d +#define IRDMA_AE_AMP_FASTREG_INVALID_RIGHTS 0x010e +#define IRDMA_AE_AMP_FASTREG_INVALID_LENGTH 0x0110 +#define IRDMA_AE_AMP_INVALIDATE_SHARED 0x0111 +#define IRDMA_AE_AMP_INVALIDATE_NO_REMOTE_ACCESS_RIGHTS 0x0112 +#define IRDMA_AE_AMP_INVALIDATE_MR_WITH_BOUND_WINDOWS 0x0113 +#define IRDMA_AE_AMP_MWBIND_VALID_STAG 0x0114 +#define IRDMA_AE_AMP_MWBIND_OF_MR_STAG 0x0115 +#define IRDMA_AE_AMP_MWBIND_TO_ZERO_BASED_STAG 0x0116 +#define IRDMA_AE_AMP_MWBIND_TO_MW_STAG 0x0117 +#define IRDMA_AE_AMP_MWBIND_INVALID_RIGHTS 0x0118 +#define IRDMA_AE_AMP_MWBIND_INVALID_BOUNDS 0x0119 +#define IRDMA_AE_AMP_MWBIND_TO_INVALID_PARENT 0x011a +#define IRDMA_AE_AMP_MWBIND_BIND_DISABLED 0x011b +#define IRDMA_AE_PRIV_OPERATION_DENIED 0x011c +#define IRDMA_AE_AMP_INVALIDATE_TYPE1_MW 0x011d +#define IRDMA_AE_AMP_MWBIND_ZERO_BASED_TYPE1_MW 0x011e +#define IRDMA_AE_AMP_FASTREG_INVALID_PBL_HPS_CFG 0x011f +#define IRDMA_AE_AMP_FASTREG_PBLE_MISMATCH 0x0121 +#define IRDMA_AE_UDA_XMIT_DGRAM_TOO_LONG 0x0132 +#define IRDMA_AE_UDA_XMIT_BAD_PD 0x0133 +#define IRDMA_AE_UDA_XMIT_DGRAM_TOO_SHORT 0x0134 +#define IRDMA_AE_UDA_L4LEN_INVALID 0x0135 +#define IRDMA_AE_BAD_CLOSE 0x0201 +#define IRDMA_AE_RDMAP_ROE_BAD_LLP_CLOSE 0x0202 +#define IRDMA_AE_CQ_OPERATION_ERROR 0x0203 +#define 
IRDMA_AE_RDMA_READ_WHILE_ORD_ZERO 0x0205 +#define IRDMA_AE_STAG_ZERO_INVALID 0x0206 +#define IRDMA_AE_IB_RREQ_AND_Q1_FULL 0x0207 +#define IRDMA_AE_IB_INVALID_REQUEST 0x0208 +#define IRDMA_AE_WQE_UNEXPECTED_OPCODE 0x020a +#define IRDMA_AE_WQE_INVALID_PARAMETER 0x020b +#define IRDMA_AE_WQE_INVALID_FRAG_DATA 0x020c +#define IRDMA_AE_IB_REMOTE_ACCESS_ERROR 0x020d +#define IRDMA_AE_IB_REMOTE_OP_ERROR 0x020e +#define IRDMA_AE_WQE_LSMM_TOO_LONG 0x0220 +#define IRDMA_AE_DDP_INVALID_MSN_GAP_IN_MSN 0x0301 +#define IRDMA_AE_DDP_UBE_DDP_MESSAGE_TOO_LONG_FOR_AVAILABLE_BUFFER 0x0303 +#define IRDMA_AE_DDP_UBE_INVALID_DDP_VERSION 0x0304 +#define IRDMA_AE_DDP_UBE_INVALID_MO 0x0305 +#define IRDMA_AE_DDP_UBE_INVALID_MSN_NO_BUFFER_AVAILABLE 0x0306 +#define IRDMA_AE_DDP_UBE_INVALID_QN 0x0307 +#define IRDMA_AE_DDP_NO_L_BIT 0x0308 +#define IRDMA_AE_RDMAP_ROE_INVALID_RDMAP_VERSION 0x0311 +#define IRDMA_AE_RDMAP_ROE_UNEXPECTED_OPCODE 0x0312 +#define IRDMA_AE_ROE_INVALID_RDMA_READ_REQUEST 0x0313 +#define IRDMA_AE_ROE_INVALID_RDMA_WRITE_OR_READ_RESP 0x0314 +#define IRDMA_AE_ROCE_RSP_LENGTH_ERROR 0x0316 +#define IRDMA_AE_ROCE_EMPTY_MCG 0x0380 +#define IRDMA_AE_ROCE_BAD_MC_IP_ADDR 0x0381 +#define IRDMA_AE_ROCE_BAD_MC_QPID 0x0382 +#define IRDMA_AE_MCG_QP_PROTOCOL_MISMATCH 0x0383 +#define IRDMA_AE_INVALID_ARP_ENTRY 0x0401 +#define IRDMA_AE_INVALID_TCP_OPTION_RCVD 0x0402 +#define IRDMA_AE_STALE_ARP_ENTRY 0x0403 +#define IRDMA_AE_INVALID_AH_ENTRY 0x0406 +#define IRDMA_AE_LLP_CLOSE_COMPLETE 0x0501 +#define IRDMA_AE_LLP_CONNECTION_RESET 0x0502 +#define IRDMA_AE_LLP_FIN_RECEIVED 0x0503 +#define IRDMA_AE_LLP_RECEIVED_MPA_CRC_ERROR 0x0505 +#define IRDMA_AE_LLP_SEGMENT_TOO_SMALL 0x0507 +#define IRDMA_AE_LLP_SYN_RECEIVED 0x0508 +#define IRDMA_AE_LLP_TERMINATE_RECEIVED 0x0509 +#define IRDMA_AE_LLP_TOO_MANY_RETRIES 0x050a +#define IRDMA_AE_LLP_TOO_MANY_KEEPALIVE_RETRIES 0x050b +#define IRDMA_AE_LLP_DOUBT_REACHABILITY 0x050c +#define IRDMA_AE_LLP_CONNECTION_ESTABLISHED 0x050e +#define IRDMA_AE_RESOURCE_EXHAUSTION 0x0520 +#define IRDMA_AE_RESET_SENT 0x0601 +#define IRDMA_AE_TERMINATE_SENT 0x0602 +#define IRDMA_AE_RESET_NOT_SENT 0x0603 +#define IRDMA_AE_LCE_QP_CATASTROPHIC 0x0700 +#define IRDMA_AE_LCE_FUNCTION_CATASTROPHIC 0x0701 +#define IRDMA_AE_LCE_CQ_CATASTROPHIC 0x0702 +#define IRDMA_AE_QP_SUSPEND_COMPLETE 0x0900 + +#define LS_64_1(val, bits) ((u64)(uintptr_t)(val) << (bits)) +#define RS_64_1(val, bits) ((u64)(uintptr_t)(val) >> (bits)) +#define LS_32_1(val, bits) (u32)((val) << (bits)) +#define RS_32_1(val, bits) (u32)((val) >> (bits)) +#define LS_64(val, field) (((u64)(val) << field ## _S) & (field ## _M)) +#define RS_64(val, field) ((u64)((val) & field ## _M) >> field ## _S) +#define LS_32(val, field) (((val) << field ## _S) & (field ## _M)) +#define RS_32(val, field) (((val) & field ## _M) >> field ## _S) + +#define FLD_LS_64(dev, val, field) \ + (((u64)(val) << (dev)->hw_shifts[field ## _S]) & (dev)->hw_masks[field ## _M]) +#define FLD_RS_64(dev, val, field) \ + ((u64)((val) & (dev)->hw_masks[field ## _M]) >> (dev)->hw_shifts[field ## _S]) +#define FLD_LS_32(dev, val, field) \ + (((val) << (dev)->hw_shifts[field ## _S]) & (dev)->hw_masks[field ## _M]) +#define FLD_RS_32(dev, val, field) \ + ((u64)((val) & (dev)->hw_masks[field ## _M]) >> (dev)->hw_shifts[field ## _S]) + +#define FW_MAJOR_VER(dev) \ + ((u16)RS_64((dev)->feature_info[IRDMA_FEATURE_FW_INFO], IRDMA_FW_VER_MAJOR)) +#define FW_MINOR_VER(dev) \ + ((u16)RS_64((dev)->feature_info[IRDMA_FEATURE_FW_INFO], IRDMA_FW_VER_MINOR)) + +#define IRDMA_STATS_DELTA(a, b, c) ((a) 
>= (b) ? (a) - (b) : (a) + (c) - (b)) +#define IRDMA_MAX_STATS_32 0xFFFFFFFFULL +#define IRDMA_MAX_STATS_48 0xFFFFFFFFFFFFULL + +#define IRDMA_MAX_CQ_READ_THRESH 0x3FFFF +/* ILQ CQP hash table fields */ +#define IRDMA_CQPSQ_QHASH_VLANID_S 32 +#define IRDMA_CQPSQ_QHASH_VLANID_M \ + ((u64)0xfff << IRDMA_CQPSQ_QHASH_VLANID_S) + +#define IRDMA_CQPSQ_QHASH_QPN_S 32 +#define IRDMA_CQPSQ_QHASH_QPN_M \ + ((u64)0x3ffff << IRDMA_CQPSQ_QHASH_QPN_S) + +#define IRDMA_CQPSQ_QHASH_QS_HANDLE_S 0 +#define IRDMA_CQPSQ_QHASH_QS_HANDLE_M ((u64)0x3ff << IRDMA_CQPSQ_QHASH_QS_HANDLE_S) + +#define IRDMA_CQPSQ_QHASH_SRC_PORT_S 16 +#define IRDMA_CQPSQ_QHASH_SRC_PORT_M \ + ((u64)0xffff << IRDMA_CQPSQ_QHASH_SRC_PORT_S) + +#define IRDMA_CQPSQ_QHASH_DEST_PORT_S 0 +#define IRDMA_CQPSQ_QHASH_DEST_PORT_M \ + ((u64)0xffff << IRDMA_CQPSQ_QHASH_DEST_PORT_S) + +#define IRDMA_CQPSQ_QHASH_ADDR0_S 32 +#define IRDMA_CQPSQ_QHASH_ADDR0_M \ + ((u64)0xffffffff << IRDMA_CQPSQ_QHASH_ADDR0_S) + +#define IRDMA_CQPSQ_QHASH_ADDR1_S 0 +#define IRDMA_CQPSQ_QHASH_ADDR1_M \ + ((u64)0xffffffff << IRDMA_CQPSQ_QHASH_ADDR1_S) + +#define IRDMA_CQPSQ_QHASH_ADDR2_S 32 +#define IRDMA_CQPSQ_QHASH_ADDR2_M \ + ((u64)0xffffffff << IRDMA_CQPSQ_QHASH_ADDR2_S) + +#define IRDMA_CQPSQ_QHASH_ADDR3_S 0 +#define IRDMA_CQPSQ_QHASH_ADDR3_M \ + ((u64)0xffffffff << IRDMA_CQPSQ_QHASH_ADDR3_S) + +#define IRDMA_CQPSQ_QHASH_WQEVALID_S 63 +#define IRDMA_CQPSQ_QHASH_WQEVALID_M \ + BIT_ULL(IRDMA_CQPSQ_QHASH_WQEVALID_S) +#define IRDMA_CQPSQ_QHASH_OPCODE_S 32 +#define IRDMA_CQPSQ_QHASH_OPCODE_M \ + ((u64)0x3f << IRDMA_CQPSQ_QHASH_OPCODE_S) + +#define IRDMA_CQPSQ_QHASH_MANAGE_S 61 +#define IRDMA_CQPSQ_QHASH_MANAGE_M \ + ((u64)0x3 << IRDMA_CQPSQ_QHASH_MANAGE_S) + +#define IRDMA_CQPSQ_QHASH_IPV4VALID_S 60 +#define IRDMA_CQPSQ_QHASH_IPV4VALID_M \ + BIT_ULL(IRDMA_CQPSQ_QHASH_IPV4VALID_S) + +#define IRDMA_CQPSQ_QHASH_VLANVALID_S 59 +#define IRDMA_CQPSQ_QHASH_VLANVALID_M \ + BIT_ULL(IRDMA_CQPSQ_QHASH_VLANVALID_S) + +#define IRDMA_CQPSQ_QHASH_ENTRYTYPE_S 42 +#define IRDMA_CQPSQ_QHASH_ENTRYTYPE_M \ + ((u64)0x7 << IRDMA_CQPSQ_QHASH_ENTRYTYPE_S) + +/* Stats */ +#define IRDMA_CQPSQ_STATS_WQEVALID_S 63 +#define IRDMA_CQPSQ_STATS_WQEVALID_M \ + BIT_ULL(IRDMA_CQPSQ_STATS_WQEVALID_S) + +#define IRDMA_CQPSQ_STATS_ALLOC_INST_S 62 +#define IRDMA_CQPSQ_STATS_ALLOC_INST_M \ + BIT_ULL(IRDMA_CQPSQ_STATS_ALLOC_INST_S) + +#define IRDMA_CQPSQ_STATS_USE_HMC_FCN_INDEX_S 60 +#define IRDMA_CQPSQ_STATS_USE_HMC_FCN_INDEX_M \ + BIT_ULL(IRDMA_CQPSQ_STATS_USE_HMC_FCN_INDEX_S) + +#define IRDMA_CQPSQ_STATS_USE_INST_S 61 +#define IRDMA_CQPSQ_STATS_USE_INST_M \ + BIT_ULL(IRDMA_CQPSQ_STATS_USE_INST_S) + +#define IRDMA_CQPSQ_STATS_OP_S 32 +#define IRDMA_CQPSQ_STATS_OP_M \ + ((u64)0x3f << IRDMA_CQPSQ_STATS_OP_S) + +#define IRDMA_CQPSQ_STATS_INST_INDEX_S 0 +#define IRDMA_CQPSQ_STATS_INST_INDEX_M \ + ((u64)0x7f << IRDMA_CQPSQ_STATS_INST_INDEX_S) + +#define IRDMA_CQPSQ_STATS_HMC_FCN_INDEX_S 0 +#define IRDMA_CQPSQ_STATS_HMC_FCN_INDEX_M \ + ((u64)0x3f << IRDMA_CQPSQ_STATS_HMC_FCN_INDEX_S) + +/* WS - Work Scheduler */ +#define IRDMA_CQPSQ_WS_WQEVALID_S 63 +#define IRDMA_CQPSQ_WS_WQEVALID_M \ + BIT_ULL(IRDMA_CQPSQ_WS_WQEVALID_S) + +#define IRDMA_CQPSQ_WS_NODEOP_S 52 +#define IRDMA_CQPSQ_WS_NODEOP_M \ + ((u64)0x3 << IRDMA_CQPSQ_WS_NODEOP_S) + +#define IRDMA_CQPSQ_WS_ENABLENODE_S 62 +#define IRDMA_CQPSQ_WS_ENABLENODE_M \ + BIT_ULL(IRDMA_CQPSQ_WS_ENABLENODE_S) + +#define IRDMA_CQPSQ_WS_NODETYPE_S 61 +#define IRDMA_CQPSQ_WS_NODETYPE_M \ + BIT_ULL(IRDMA_CQPSQ_WS_NODETYPE_S) + +#define IRDMA_CQPSQ_WS_PRIOTYPE_S 59 +#define 
IRDMA_CQPSQ_WS_PRIOTYPE_M \ + ((u64)0x3 << IRDMA_CQPSQ_WS_PRIOTYPE_S) + +#define IRDMA_CQPSQ_WS_TC_S 56 +#define IRDMA_CQPSQ_WS_TC_M \ + ((u64)0x7 << IRDMA_CQPSQ_WS_TC_S) + +#define IRDMA_CQPSQ_WS_VMVFTYPE_S 54 +#define IRDMA_CQPSQ_WS_VMVFTYPE_M \ + ((u64)0x3 << IRDMA_CQPSQ_WS_VMVFTYPE_S) + +#define IRDMA_CQPSQ_WS_VMVFNUM_S 42 +#define IRDMA_CQPSQ_WS_VMVFNUM_M \ + ((u64)0x3ff << IRDMA_CQPSQ_WS_VMVFNUM_S) + +#define IRDMA_CQPSQ_WS_OP_S 32 +#define IRDMA_CQPSQ_WS_OP_M \ + ((u64)0x3f << IRDMA_CQPSQ_WS_OP_S) + +#define IRDMA_CQPSQ_WS_PARENTID_S 16 +#define IRDMA_CQPSQ_WS_PARENTID_M \ + ((u64)0x3ff << IRDMA_CQPSQ_WS_PARENTID_S) + +#define IRDMA_CQPSQ_WS_NODEID_S 0 +#define IRDMA_CQPSQ_WS_NODEID_M \ + ((u64)0x3ff << IRDMA_CQPSQ_WS_NODEID_S) + +#define IRDMA_CQPSQ_WS_VSI_S 48 +#define IRDMA_CQPSQ_WS_VSI_M \ + ((u64)0x3ff << IRDMA_CQPSQ_WS_VSI_S) + +#define IRDMA_CQPSQ_WS_WEIGHT_S 32 +#define IRDMA_CQPSQ_WS_WEIGHT_M \ + ((u64)0x7f << IRDMA_CQPSQ_WS_WEIGHT_S) + +/* UP to UP mapping */ +#define IRDMA_CQPSQ_UP_WQEVALID_S 63 +#define IRDMA_CQPSQ_UP_WQEVALID_M \ + BIT_ULL(IRDMA_CQPSQ_UP_WQEVALID_S) + +#define IRDMA_CQPSQ_UP_USEVLAN_S 62 +#define IRDMA_CQPSQ_UP_USEVLAN_M \ + BIT_ULL(IRDMA_CQPSQ_UP_USEVLAN_S) + +#define IRDMA_CQPSQ_UP_USEOVERRIDE_S 61 +#define IRDMA_CQPSQ_UP_USEOVERRIDE_M \ + BIT_ULL(IRDMA_CQPSQ_UP_USEOVERRIDE_S) + +#define IRDMA_CQPSQ_UP_OP_S 32 +#define IRDMA_CQPSQ_UP_OP_M \ + ((u64)0x3f << IRDMA_CQPSQ_UP_OP_S) + +#define IRDMA_CQPSQ_UP_HMCFCNIDX_S 0 +#define IRDMA_CQPSQ_UP_HMCFCNIDX_M \ + ((u64)0x3f << IRDMA_CQPSQ_UP_HMCFCNIDX_S) + +#define IRDMA_CQPSQ_UP_CNPOVERRIDE_S 32 +#define IRDMA_CQPSQ_UP_CNPOVERRIDE_M \ + ((u64)0x3f << IRDMA_CQPSQ_UP_CNPOVERRIDE_S) + +/* Query RDMA features*/ +#define IRDMA_CQPSQ_QUERY_RDMA_FEATURES_WQEVALID_S 63 +#define IRDMA_CQPSQ_QUERY_RDMA_FEATURES_WQEVALID_M \ + BIT_ULL(IRDMA_CQPSQ_QUERY_RDMA_FEATURES_WQEVALID_S) + +#define IRDMA_CQPSQ_QUERY_RDMA_FEATURES_BUF_LEN_S 0 +#define IRDMA_CQPSQ_QUERY_RDMA_FEATURES_BUF_LEN_M \ + ((u64)0xffffffff << IRDMA_CQPSQ_QUERY_RDMA_FEATURES_BUF_LEN_S) + +#define IRDMA_CQPSQ_QUERY_RDMA_FEATURES_OP_S 32 +#define IRDMA_CQPSQ_QUERY_RDMA_FEATURES_OP_M \ + ((u64)0x3f << IRDMA_CQPSQ_QUERY_RDMA_FEATURES_OP_S) + +#define IRDMA_CQPSQ_QUERY_RDMA_FEATURES_HW_MODEL_USED_S 32 +#define IRDMA_CQPSQ_QUERY_RDMA_FEATURES_HW_MODEL_USED_M \ + (0xffffULL << IRDMA_CQPSQ_QUERY_RDMA_FEATURES_HW_MODEL_USED_S) + +#define IRDMA_CQPSQ_QUERY_RDMA_FEATURES_HW_MAJOR_VERSION_S 16 +#define IRDMA_CQPSQ_QUERY_RDMA_FEATURES_HW_MAJOR_VERSION_M \ + (0xffULL << IRDMA_CQPSQ_QUERY_RDMA_FEATURES_HW_MAJOR_VERSION_S) + +#define IRDMA_CQPSQ_QUERY_RDMA_FEATURES_HW_MINOR_VERSION_S 0 +#define IRDMA_CQPSQ_QUERY_RDMA_FEATURES_HW_MINOR_VERSION_M \ + (0xffULL << IRDMA_CQPSQ_QUERY_RDMA_FEATURES_HW_MINOR_VERSION_S) + +/* CQP Host Context */ +#define IRDMA_CQPHC_EN_DC_TCP_S 25 +#define IRDMA_CQPHC_EN_DC_TCP_M BIT_ULL(IRDMA_CQPHC_EN_DC_TCP_S) + +#define IRDMA_CQPHC_SQSIZE_S 8 +#define IRDMA_CQPHC_SQSIZE_M (0xfULL << IRDMA_CQPHC_SQSIZE_S) + +#define IRDMA_CQPHC_DISABLE_PFPDUS_S 1 +#define IRDMA_CQPHC_DISABLE_PFPDUS_M BIT_ULL(IRDMA_CQPHC_DISABLE_PFPDUS_S) + +#define IRDMA_CQPHC_ROCEV2_RTO_POLICY_S 2 +#define IRDMA_CQPHC_ROCEV2_RTO_POLICY_M BIT_ULL(IRDMA_CQPHC_ROCEV2_RTO_POLICY_S) + +#define IRDMA_CQPHC_PROTOCOL_USED_S 3 +#define IRDMA_CQPHC_PROTOCOL_USED_M (0x3ULL << IRDMA_CQPHC_PROTOCOL_USED_S) + +#define IRDMA_CQPHC_HW_MINVER_S 0 +#define IRDMA_CQPHC_HW_MINVER_M (0xffffULL << IRDMA_CQPHC_HW_MINVER_S) + +#define IRDMA_CQPHC_HW_MAJVER_S 16 +#define IRDMA_CQPHC_HW_MAJVER_M 
(0xffffULL << IRDMA_CQPHC_HW_MAJVER_S) + +#define IRDMA_CQPHC_STRUCTVER_S 24 +#define IRDMA_CQPHC_STRUCTVER_M (0xffULL << IRDMA_CQPHC_STRUCTVER_S) + +#define IRDMA_CQPHC_CEQPERVF_S 32 +#define IRDMA_CQPHC_CEQPERVF_M (0xffULL << IRDMA_CQPHC_CEQPERVF_S) + +#define IRDMA_CQPHC_ENABLED_VFS_S 32 +#define IRDMA_CQPHC_ENABLED_VFS_M (0x3fULL << IRDMA_CQPHC_ENABLED_VFS_S) + +#define IRDMA_CQPHC_HMC_PROFILE_S 0 +#define IRDMA_CQPHC_HMC_PROFILE_M (0x7ULL << IRDMA_CQPHC_HMC_PROFILE_S) + +#define IRDMA_CQPHC_SVER_S 24 +#define IRDMA_CQPHC_SVER_M (0xffULL << IRDMA_CQPHC_SVER_S) + +#define IRDMA_CQPHC_SQBASE_S 9 +#define IRDMA_CQPHC_SQBASE_M \ + (0xfffffffffffffeULL << IRDMA_CQPHC_SQBASE_S) + +#define IRDMA_CQPHC_QPCTX_S 0 +#define IRDMA_CQPHC_QPCTX_M \ + (0xffffffffffffffffULL << IRDMA_CQPHC_QPCTX_S) + +/* iWARP QP Doorbell shadow area */ +#define IRDMA_QP_DBSA_HW_SQ_TAIL_S 0 +#define IRDMA_QP_DBSA_HW_SQ_TAIL_M \ + (0x7fffULL << IRDMA_QP_DBSA_HW_SQ_TAIL_S) + +/* Completion Queue Doorbell shadow area */ +#define IRDMA_CQ_DBSA_CQEIDX_S 0 +#define IRDMA_CQ_DBSA_CQEIDX_M (0xfffffULL << IRDMA_CQ_DBSA_CQEIDX_S) + +#define IRDMA_CQ_DBSA_SW_CQ_SELECT_S 0 +#define IRDMA_CQ_DBSA_SW_CQ_SELECT_M \ + (0x3fffULL << IRDMA_CQ_DBSA_SW_CQ_SELECT_S) + +#define IRDMA_CQ_DBSA_ARM_NEXT_S 14 +#define IRDMA_CQ_DBSA_ARM_NEXT_M BIT_ULL(IRDMA_CQ_DBSA_ARM_NEXT_S) + +#define IRDMA_CQ_DBSA_ARM_NEXT_SE_S 15 +#define IRDMA_CQ_DBSA_ARM_NEXT_SE_M BIT_ULL(IRDMA_CQ_DBSA_ARM_NEXT_SE_S) + +#define IRDMA_CQ_DBSA_ARM_SEQ_NUM_S 16 +#define IRDMA_CQ_DBSA_ARM_SEQ_NUM_M \ + (0x3ULL << IRDMA_CQ_DBSA_ARM_SEQ_NUM_S) + +/* CQP and iWARP Completion Queue */ +#define IRDMA_CQ_QPCTX_S IRDMA_CQPHC_QPCTX_S +#define IRDMA_CQ_QPCTX_M IRDMA_CQPHC_QPCTX_M + +#define IRDMA_CCQ_OPRETVAL_S 0 +#define IRDMA_CCQ_OPRETVAL_M (0xffffffffULL << IRDMA_CCQ_OPRETVAL_S) + +#define IRDMA_CQ_MINERR_S 0 +#define IRDMA_CQ_MINERR_M (0xffffULL << IRDMA_CQ_MINERR_S) + +#define IRDMA_CQ_MAJERR_S 16 +#define IRDMA_CQ_MAJERR_M (0xffffULL << IRDMA_CQ_MAJERR_S) + +#define IRDMA_CQ_WQEIDX_S 32 +#define IRDMA_CQ_WQEIDX_M (0x7fffULL << IRDMA_CQ_WQEIDX_S) + +#define IRDMA_CQ_EXTCQE_S 50 +#define IRDMA_CQ_EXTCQE_M BIT_ULL(IRDMA_CQ_EXTCQE_S) + +#define IRDMA_CQ_ERROR_S 55 +#define IRDMA_CQ_ERROR_M BIT_ULL(IRDMA_CQ_ERROR_S) + +#define IRDMA_CQ_SQ_S 62 +#define IRDMA_CQ_SQ_M BIT_ULL(IRDMA_CQ_SQ_S) + +#define IRDMA_CQ_VALID_S 63 +#define IRDMA_CQ_VALID_M BIT_ULL(IRDMA_CQ_VALID_S) + +#define IRDMA_CQ_IMMVALID_S 62 +#define IRDMA_CQ_IMMVALID_M BIT_ULL(IRDMA_CQ_IMMVALID_S) + +#define IRDMA_CQ_UDSMACVALID_S 61 +#define IRDMA_CQ_UDSMACVALID_M BIT_ULL(IRDMA_CQ_UDSMACVALID_S) + +#define IRDMA_CQ_UDVLANVALID_S 60 +#define IRDMA_CQ_UDVLANVALID_M BIT_ULL(IRDMA_CQ_UDVLANVALID_S) + +#define IRDMA_CQ_UDSMAC_S 0 +#define IRDMA_CQ_UDSMAC_M (0xffffffffffffULL << IRDMA_CQ_UDSMAC_S) + +#define IRDMA_CQ_UDVLAN_S 48 +#define IRDMA_CQ_UDVLAN_M (0xffffULL << IRDMA_CQ_UDVLAN_S) + +#define IRDMA_CQ_IMMDATA_S 0 +#define IRDMA_CQ_IMMDATA_M (0xffffffffffffffffULL << IRDMA_CQ_IMMVALID_S) + +#define IRDMA_CQ_IMMDATALOW32_S 0 +#define IRDMA_CQ_IMMDATALOW32_M (0xffffffffULL << IRDMA_CQ_IMMDATALOW32_S) + +#define IRDMA_CQ_IMMDATAUP32_S 32 +#define IRDMA_CQ_IMMDATAUP32_M (0xffffffffULL << IRDMA_CQ_IMMDATAUP32_S) + +#define IRDMACQ_PAYLDLEN_S 0 +#define IRDMACQ_PAYLDLEN_M (0xffffffffULL << IRDMACQ_PAYLDLEN_S) + +#define IRDMACQ_TCPSEQNUMRTT_S 32 +#define IRDMACQ_TCPSEQNUMRTT_M (0xffffffffULL << IRDMACQ_TCPSEQNUMRTT_S) + +#define IRDMACQ_INVSTAG_S 0 +#define IRDMACQ_INVSTAG_M (0xffffffffULL << IRDMACQ_INVSTAG_S) + +#define 
IRDMACQ_QPID_S 32 +#define IRDMACQ_QPID_M (0x3ffffULL << IRDMACQ_QPID_S) + +#define IRDMACQ_UDSRCQPN_S 0 +#define IRDMACQ_UDSRCQPN_M (0xffffffffULL << IRDMACQ_UDSRCQPN_S) + +#define IRDMACQ_PSHDROP_S 51 +#define IRDMACQ_PSHDROP_M BIT_ULL(IRDMACQ_PSHDROP_S) + +#define IRDMACQ_STAG_S 53 +#define IRDMACQ_STAG_M BIT_ULL(IRDMACQ_STAG_S) + +#define IRDMACQ_IPV4_S 53 +#define IRDMACQ_IPV4_M BIT_ULL(IRDMACQ_IPV4_S) + +#define IRDMACQ_SOEVENT_S 54 +#define IRDMACQ_SOEVENT_M BIT_ULL(IRDMACQ_SOEVENT_S) + +#define IRDMACQ_OP_S 56 +#define IRDMACQ_OP_M (0x3fULL << IRDMACQ_OP_S) + +/* CEQE format */ +#define IRDMA_CEQE_CQCTX_S 0 +#define IRDMA_CEQE_CQCTX_M \ + (0x7fffffffffffffffULL << IRDMA_CEQE_CQCTX_S) + +#define IRDMA_CEQE_VALID_S 63 +#define IRDMA_CEQE_VALID_M BIT_ULL(IRDMA_CEQE_VALID_S) + +/* AEQE format */ +#define IRDMA_AEQE_COMPCTX_S IRDMA_CQPHC_QPCTX_S +#define IRDMA_AEQE_COMPCTX_M IRDMA_CQPHC_QPCTX_M + +#define IRDMA_AEQE_QPCQID_LOW_S 0 +#define IRDMA_AEQE_QPCQID_LOW_M (0x3ffffULL << IRDMA_AEQE_QPCQID_LOW_S) + +#define IRDMA_AEQE_QPCQID_HI_S 46 +#define IRDMA_AEQE_QPCQID_HI_M BIT_ULL(IRDMA_AEQE_QPCQID_HI_S) + +#define IRDMA_AEQE_WQDESCIDX_S 18 +#define IRDMA_AEQE_WQDESCIDX_M (0x7fffULL << IRDMA_AEQE_WQDESCIDX_S) + +#define IRDMA_AEQE_OVERFLOW_S 33 +#define IRDMA_AEQE_OVERFLOW_M BIT_ULL(IRDMA_AEQE_OVERFLOW_S) + +#define IRDMA_AEQE_AECODE_S 34 +#define IRDMA_AEQE_AECODE_M (0xfffULL << IRDMA_AEQE_AECODE_S) + +#define IRDMA_AEQE_AESRC_S 50 +#define IRDMA_AEQE_AESRC_M (0xfULL << IRDMA_AEQE_AESRC_S) + +#define IRDMA_AEQE_IWSTATE_S 54 +#define IRDMA_AEQE_IWSTATE_M (0x7ULL << IRDMA_AEQE_IWSTATE_S) + +#define IRDMA_AEQE_TCPSTATE_S 57 +#define IRDMA_AEQE_TCPSTATE_M (0xfULL << IRDMA_AEQE_TCPSTATE_S) + +#define IRDMA_AEQE_Q2DATA_S 61 +#define IRDMA_AEQE_Q2DATA_M (0x3ULL << IRDMA_AEQE_Q2DATA_S) + +#define IRDMA_AEQE_VALID_S 63 +#define IRDMA_AEQE_VALID_M BIT_ULL(IRDMA_AEQE_VALID_S) + +#define IRDMA_UDA_QPSQ_NEXT_HDR_S 16 +#define IRDMA_UDA_QPSQ_NEXT_HDR_M ((u64)0xff << IRDMA_UDA_QPSQ_NEXT_HDR_S) + +#define IRDMA_UDA_QPSQ_OPCODE_S 32 +#define IRDMA_UDA_QPSQ_OPCODE_M ((u64)0x3f << IRDMA_UDA_QPSQ_OPCODE_S) + +#define IRDMA_UDA_QPSQ_L4LEN_S 42 +#define IRDMA_UDA_QPSQ_L4LEN_M ((u64)0xf << IRDMA_UDA_QPSQ_L4LEN_S) + +#define IRDMA_GEN1_UDA_QPSQ_L4LEN_S 24 +#define IRDMA_GEN1_UDA_QPSQ_L4LEN_M ((u64)0xf << IRDMA_GEN1_UDA_QPSQ_L4LEN_S) + +#define IRDMA_UDA_QPSQ_AHIDX_S 0 +#define IRDMA_UDA_QPSQ_AHIDX_M ((u64)0x1ffff << IRDMA_UDA_QPSQ_AHIDX_S) + +#define IRDMA_UDA_QPSQ_VALID_S 63 +#define IRDMA_UDA_QPSQ_VALID_M \ + BIT_ULL(IRDMA_UDA_QPSQ_VALID_S) + +#define IRDMA_UDA_QPSQ_SIGCOMPL_S 62 +#define IRDMA_UDA_QPSQ_SIGCOMPL_M BIT_ULL(IRDMA_UDA_QPSQ_SIGCOMPL_S) + +#define IRDMA_UDA_QPSQ_MACLEN_S 56 +#define IRDMA_UDA_QPSQ_MACLEN_M \ + ((u64)0x7f << IRDMA_UDA_QPSQ_MACLEN_S) + +#define IRDMA_UDA_QPSQ_IPLEN_S 48 +#define IRDMA_UDA_QPSQ_IPLEN_M \ + ((u64)0x7f << IRDMA_UDA_QPSQ_IPLEN_S) + +#define IRDMA_UDA_QPSQ_L4T_S 30 +#define IRDMA_UDA_QPSQ_L4T_M \ + ((u64)0x3 << IRDMA_UDA_QPSQ_L4T_S) + +#define IRDMA_UDA_QPSQ_IIPT_S 28 +#define IRDMA_UDA_QPSQ_IIPT_M \ + ((u64)0x3 << IRDMA_UDA_QPSQ_IIPT_S) + +#define IRDMA_UDA_PAYLOADLEN_S 0 +#define IRDMA_UDA_PAYLOADLEN_M ((u64)0x3fff << IRDMA_UDA_PAYLOADLEN_S) + +#define IRDMA_UDA_HDRLEN_S 16 +#define IRDMA_UDA_HDRLEN_M ((u64)0x1ff << IRDMA_UDA_HDRLEN_S) + +#define IRDMA_VLAN_TAG_VALID_S 50 +#define IRDMA_VLAN_TAG_VALID_M BIT_ULL(IRDMA_VLAN_TAG_VALID_S) + +#define IRDMA_UDA_L3PROTO_S 0 +#define IRDMA_UDA_L3PROTO_M ((u64)0x3 << IRDMA_UDA_L3PROTO_S) + +#define IRDMA_UDA_L4PROTO_S 16 
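/*
 * Illustrative sketch (hypothetical, not from this series): every field in
 * this header is described by a shift (_S) / mask (_M) pair, and the
 * LS_64()/RS_64() helpers defined earlier in defs.h pack and unpack a value
 * with that pair. The example helper below only shows the usage pattern for
 * the UDA SQ WQE header fields defined above; its name is made up.
 */
static inline u64 irdma_example_build_uda_hdr(u8 opcode)
{
	/* place the 6-bit opcode at bits 37:32 and set the valid bit (63) */
	return LS_64(opcode, IRDMA_UDA_QPSQ_OPCODE) |
	       LS_64(1, IRDMA_UDA_QPSQ_VALID);
}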
+#define IRDMA_UDA_L4PROTO_M ((u64)0x3 << IRDMA_UDA_L4PROTO_S) + +#define IRDMA_UDA_QPSQ_DOLOOPBACK_S 44 +#define IRDMA_UDA_QPSQ_DOLOOPBACK_M \ + BIT_ULL(IRDMA_UDA_QPSQ_DOLOOPBACK_S) + +/* CQP SQ WQE common fields */ +#define IRDMA_CQPSQ_BUFSIZE_S 0 +#define IRDMA_CQPSQ_BUFSIZE_M (0xffffffffULL << IRDMA_CQPSQ_BUFSIZE_S) + +#define IRDMA_CQPSQ_OPCODE_S 32 +#define IRDMA_CQPSQ_OPCODE_M (0x3fULL << IRDMA_CQPSQ_OPCODE_S) + +#define IRDMA_CQPSQ_WQEVALID_S 63 +#define IRDMA_CQPSQ_WQEVALID_M BIT_ULL(IRDMA_CQPSQ_WQEVALID_S) + +#define IRDMA_CQPSQ_TPHVAL_S 0 +#define IRDMA_CQPSQ_TPHVAL_M (0xffULL << IRDMA_CQPSQ_TPHVAL_S) + +#define IRDMA_CQPSQ_VSIIDX_S 8 +#define IRDMA_CQPSQ_VSIIDX_M (0x3ffULL << IRDMA_CQPSQ_VSIIDX_S) + +#define IRDMA_CQPSQ_TPHEN_S 60 +#define IRDMA_CQPSQ_TPHEN_M BIT_ULL(IRDMA_CQPSQ_TPHEN_S) + +#define IRDMA_CQPSQ_PBUFADDR_S IRDMA_CQPHC_QPCTX_S +#define IRDMA_CQPSQ_PBUFADDR_M IRDMA_CQPHC_QPCTX_M + +/* Create/Modify/Destroy QP */ + +#define IRDMA_CQPSQ_QP_NEWMSS_S 32 +#define IRDMA_CQPSQ_QP_NEWMSS_M (0x3fffULL << IRDMA_CQPSQ_QP_NEWMSS_S) + +#define IRDMA_CQPSQ_QP_TERMLEN_S 48 +#define IRDMA_CQPSQ_QP_TERMLEN_M (0xfULL << IRDMA_CQPSQ_QP_TERMLEN_S) + +#define IRDMA_CQPSQ_QP_QPCTX_S IRDMA_CQPHC_QPCTX_S +#define IRDMA_CQPSQ_QP_QPCTX_M IRDMA_CQPHC_QPCTX_M + +#define IRDMA_CQPSQ_QP_QPID_S 0 +#define IRDMA_CQPSQ_QP_QPID_M (0x3FFFFUL) + +#define IRDMA_CQPSQ_QP_OP_S 32 +#define IRDMA_CQPSQ_QP_OP_M IRDMACQ_OP_M + +#define IRDMA_CQPSQ_QP_ORDVALID_S 42 +#define IRDMA_CQPSQ_QP_ORDVALID_M BIT_ULL(IRDMA_CQPSQ_QP_ORDVALID_S) + +#define IRDMA_CQPSQ_QP_TOECTXVALID_S 43 +#define IRDMA_CQPSQ_QP_TOECTXVALID_M \ + BIT_ULL(IRDMA_CQPSQ_QP_TOECTXVALID_S) + +#define IRDMA_CQPSQ_QP_CACHEDVARVALID_S 44 +#define IRDMA_CQPSQ_QP_CACHEDVARVALID_M \ + BIT_ULL(IRDMA_CQPSQ_QP_CACHEDVARVALID_S) + +#define IRDMA_CQPSQ_QP_VQ_S 45 +#define IRDMA_CQPSQ_QP_VQ_M BIT_ULL(IRDMA_CQPSQ_QP_VQ_S) + +#define IRDMA_CQPSQ_QP_FORCELOOPBACK_S 46 +#define IRDMA_CQPSQ_QP_FORCELOOPBACK_M \ + BIT_ULL(IRDMA_CQPSQ_QP_FORCELOOPBACK_S) + +#define IRDMA_CQPSQ_QP_CQNUMVALID_S 47 +#define IRDMA_CQPSQ_QP_CQNUMVALID_M \ + BIT_ULL(IRDMA_CQPSQ_QP_CQNUMVALID_S) + +#define IRDMA_CQPSQ_QP_QPTYPE_S 48 +#define IRDMA_CQPSQ_QP_QPTYPE_M (0x7ULL << IRDMA_CQPSQ_QP_QPTYPE_S) + +#define IRDMA_CQPSQ_QP_MACVALID_S 51 +#define IRDMA_CQPSQ_QP_MACVALID_M BIT_ULL(IRDMA_CQPSQ_QP_MACVALID_S) + +#define IRDMA_CQPSQ_QP_MSSCHANGE_S 52 +#define IRDMA_CQPSQ_QP_MSSCHANGE_M BIT_ULL(IRDMA_CQPSQ_QP_MSSCHANGE_S) + +#define IRDMA_CQPSQ_QP_IGNOREMWBOUND_S 54 +#define IRDMA_CQPSQ_QP_IGNOREMWBOUND_M \ + BIT_ULL(IRDMA_CQPSQ_QP_IGNOREMWBOUND_S) + +#define IRDMA_CQPSQ_QP_REMOVEHASHENTRY_S 55 +#define IRDMA_CQPSQ_QP_REMOVEHASHENTRY_M \ + BIT_ULL(IRDMA_CQPSQ_QP_REMOVEHASHENTRY_S) + +#define IRDMA_CQPSQ_QP_TERMACT_S 56 +#define IRDMA_CQPSQ_QP_TERMACT_M (0x3ULL << IRDMA_CQPSQ_QP_TERMACT_S) + +#define IRDMA_CQPSQ_QP_RESETCON_S 58 +#define IRDMA_CQPSQ_QP_RESETCON_M BIT_ULL(IRDMA_CQPSQ_QP_RESETCON_S) + +#define IRDMA_CQPSQ_QP_ARPTABIDXVALID_S 59 +#define IRDMA_CQPSQ_QP_ARPTABIDXVALID_M \ + BIT_ULL(IRDMA_CQPSQ_QP_ARPTABIDXVALID_S) + +#define IRDMA_CQPSQ_QP_NEXTIWSTATE_S 60 +#define IRDMA_CQPSQ_QP_NEXTIWSTATE_M \ + (0x7ULL << IRDMA_CQPSQ_QP_NEXTIWSTATE_S) + +#define IRDMA_CQPSQ_QP_DBSHADOWADDR_S IRDMA_CQPHC_QPCTX_S +#define IRDMA_CQPSQ_QP_DBSHADOWADDR_M IRDMA_CQPHC_QPCTX_M + +/* Create/Modify/Destroy CQ */ +#define IRDMA_CQPSQ_CQ_CQSIZE_S 0 +#define IRDMA_CQPSQ_CQ_CQSIZE_M (0x1fffffULL << IRDMA_CQPSQ_CQ_CQSIZE_S) + +#define IRDMA_CQPSQ_CQ_CQCTX_S 0 +#define IRDMA_CQPSQ_CQ_CQCTX_M \ + 
(0x7fffffffffffffffULL << IRDMA_CQPSQ_CQ_CQCTX_S) + +#define IRDMA_CQPSQ_CQ_SHADOW_READ_THRESHOLD_S 0 +#define IRDMA_CQPSQ_CQ_SHADOW_READ_THRESHOLD_M \ + (0x3ffff << IRDMA_CQPSQ_CQ_SHADOW_READ_THRESHOLD_S) + +#define IRDMA_CQPSQ_CQ_OP_S 32 +#define IRDMA_CQPSQ_CQ_OP_M (0x3fULL << IRDMA_CQPSQ_CQ_OP_S) + +#define IRDMA_CQPSQ_CQ_CQRESIZE_S 43 +#define IRDMA_CQPSQ_CQ_CQRESIZE_M BIT_ULL(IRDMA_CQPSQ_CQ_CQRESIZE_S) + +#define IRDMA_CQPSQ_CQ_LPBLSIZE_S 44 +#define IRDMA_CQPSQ_CQ_LPBLSIZE_M (3ULL << IRDMA_CQPSQ_CQ_LPBLSIZE_S) + +#define IRDMA_CQPSQ_CQ_CHKOVERFLOW_S 46 +#define IRDMA_CQPSQ_CQ_CHKOVERFLOW_M \ + BIT_ULL(IRDMA_CQPSQ_CQ_CHKOVERFLOW_S) + +#define IRDMA_CQPSQ_CQ_VIRTMAP_S 47 +#define IRDMA_CQPSQ_CQ_VIRTMAP_M BIT_ULL(IRDMA_CQPSQ_CQ_VIRTMAP_S) + +#define IRDMA_CQPSQ_CQ_ENCEQEMASK_S 48 +#define IRDMA_CQPSQ_CQ_ENCEQEMASK_M \ + BIT_ULL(IRDMA_CQPSQ_CQ_ENCEQEMASK_S) + +#define IRDMA_CQPSQ_CQ_CEQIDVALID_S 49 +#define IRDMA_CQPSQ_CQ_CEQIDVALID_M \ + BIT_ULL(IRDMA_CQPSQ_CQ_CEQIDVALID_S) + +#define IRDMA_CQPSQ_CQ_AVOIDMEMCNFLCT_S 61 +#define IRDMA_CQPSQ_CQ_AVOIDMEMCNFLCT_M \ + BIT_ULL(IRDMA_CQPSQ_CQ_AVOIDMEMCNFLCT_S) + +#define IRDMA_CQPSQ_CQ_FIRSTPMPBLIDX_S 0 +#define IRDMA_CQPSQ_CQ_FIRSTPMPBLIDX_M \ + (0xfffffffULL << IRDMA_CQPSQ_CQ_FIRSTPMPBLIDX_S) + +/* Allocate/Register/Register Shared/Deallocate Stag */ +#define IRDMA_CQPSQ_STAG_VA_FBO_S IRDMA_CQPHC_QPCTX_S +#define IRDMA_CQPSQ_STAG_VA_FBO_M IRDMA_CQPHC_QPCTX_M + +#define IRDMA_CQPSQ_STAG_STAGLEN_S 0 +#define IRDMA_CQPSQ_STAG_STAGLEN_M \ + (0x3fffffffffffULL << IRDMA_CQPSQ_STAG_STAGLEN_S) + +#define IRDMA_CQPSQ_STAG_KEY_S 0 +#define IRDMA_CQPSQ_STAG_KEY_M (0xffULL << IRDMA_CQPSQ_STAG_KEY_S) + +#define IRDMA_CQPSQ_STAG_IDX_S 8 +#define IRDMA_CQPSQ_STAG_IDX_M (0xffffffULL << IRDMA_CQPSQ_STAG_IDX_S) + +#define IRDMA_CQPSQ_STAG_PARENTSTAGIDX_S 32 +#define IRDMA_CQPSQ_STAG_PARENTSTAGIDX_M \ + (0xffffffULL << IRDMA_CQPSQ_STAG_PARENTSTAGIDX_S) + +#define IRDMA_CQPSQ_STAG_MR_S 43 +#define IRDMA_CQPSQ_STAG_MR_M BIT_ULL(IRDMA_CQPSQ_STAG_MR_S) + +#define IRDMA_CQPSQ_STAG_MWTYPE_S 42 +#define IRDMA_CQPSQ_STAG_MWTYPE_M BIT_ULL(IRDMA_CQPSQ_STAG_MWTYPE_S) + +#define IRDMA_CQPSQ_STAG_MW1_BIND_DONT_VLDT_KEY_S 58 +#define IRDMA_CQPSQ_STAG_MW1_BIND_DONT_VLDT_KEY_M \ + BIT_ULL(IRDMA_CQPSQ_STAG_MW1_BIND_DONT_VLDT_KEY_S) + +#define IRDMA_CQPSQ_STAG_LPBLSIZE_S IRDMA_CQPSQ_CQ_LPBLSIZE_S +#define IRDMA_CQPSQ_STAG_LPBLSIZE_M IRDMA_CQPSQ_CQ_LPBLSIZE_M + +#define IRDMA_CQPSQ_STAG_HPAGESIZE_S 46 +#define IRDMA_CQPSQ_STAG_HPAGESIZE_M \ + ((u64)3 << IRDMA_CQPSQ_STAG_HPAGESIZE_S) + +#define IRDMA_CQPSQ_STAG_ARIGHTS_S 48 +#define IRDMA_CQPSQ_STAG_ARIGHTS_M \ + (0x1fULL << IRDMA_CQPSQ_STAG_ARIGHTS_S) + +#define IRDMA_CQPSQ_STAG_REMACCENABLED_S 53 +#define IRDMA_CQPSQ_STAG_REMACCENABLED_M \ + BIT_ULL(IRDMA_CQPSQ_STAG_REMACCENABLED_S) + +#define IRDMA_CQPSQ_STAG_VABASEDTO_S 59 +#define IRDMA_CQPSQ_STAG_VABASEDTO_M \ + BIT_ULL(IRDMA_CQPSQ_STAG_VABASEDTO_S) + +#define IRDMA_CQPSQ_STAG_USEHMCFNIDX_S 60 +#define IRDMA_CQPSQ_STAG_USEHMCFNIDX_M \ + BIT_ULL(IRDMA_CQPSQ_STAG_USEHMCFNIDX_S) + +#define IRDMA_CQPSQ_STAG_USEPFRID_S 61 +#define IRDMA_CQPSQ_STAG_USEPFRID_M \ + BIT_ULL(IRDMA_CQPSQ_STAG_USEPFRID_S) + +#define IRDMA_CQPSQ_STAG_PBA_S IRDMA_CQPHC_QPCTX_S +#define IRDMA_CQPSQ_STAG_PBA_M IRDMA_CQPHC_QPCTX_M + +#define IRDMA_CQPSQ_STAG_HMCFNIDX_S 0 +#define IRDMA_CQPSQ_STAG_HMCFNIDX_M \ + (0x3fULL << IRDMA_CQPSQ_STAG_HMCFNIDX_S) + +#define IRDMA_CQPSQ_STAG_FIRSTPMPBLIDX_S 0 +#define IRDMA_CQPSQ_STAG_FIRSTPMPBLIDX_M \ + (0xfffffffULL << IRDMA_CQPSQ_STAG_FIRSTPMPBLIDX_S) + +#define 
IRDMA_CQPSQ_QUERYSTAG_IDX_S IRDMA_CQPSQ_STAG_IDX_S +#define IRDMA_CQPSQ_QUERYSTAG_IDX_M IRDMA_CQPSQ_STAG_IDX_M + +/* Manage Local MAC Table - MLM */ +#define IRDMA_CQPSQ_MLM_TABLEIDX_S 0 +#define IRDMA_CQPSQ_MLM_TABLEIDX_M \ + (0x3fULL << IRDMA_CQPSQ_MLM_TABLEIDX_S) + +#define IRDMA_CQPSQ_MLM_FREEENTRY_S 62 +#define IRDMA_CQPSQ_MLM_FREEENTRY_M \ + BIT_ULL(IRDMA_CQPSQ_MLM_FREEENTRY_S) + +#define IRDMA_CQPSQ_MLM_IGNORE_REF_CNT_S 61 +#define IRDMA_CQPSQ_MLM_IGNORE_REF_CNT_M \ + BIT_ULL(IRDMA_CQPSQ_MLM_IGNORE_REF_CNT_S) + +#define IRDMA_CQPSQ_MLM_MAC0_S 0 +#define IRDMA_CQPSQ_MLM_MAC0_M (0xffULL << IRDMA_CQPSQ_MLM_MAC0_S) + +#define IRDMA_CQPSQ_MLM_MAC1_S 8 +#define IRDMA_CQPSQ_MLM_MAC1_M (0xffULL << IRDMA_CQPSQ_MLM_MAC1_S) + +#define IRDMA_CQPSQ_MLM_MAC2_S 16 +#define IRDMA_CQPSQ_MLM_MAC2_M (0xffULL << IRDMA_CQPSQ_MLM_MAC2_S) + +#define IRDMA_CQPSQ_MLM_MAC3_S 24 +#define IRDMA_CQPSQ_MLM_MAC3_M (0xffULL << IRDMA_CQPSQ_MLM_MAC3_S) + +#define IRDMA_CQPSQ_MLM_MAC4_S 32 +#define IRDMA_CQPSQ_MLM_MAC4_M (0xffULL << IRDMA_CQPSQ_MLM_MAC4_S) + +#define IRDMA_CQPSQ_MLM_MAC5_S 40 +#define IRDMA_CQPSQ_MLM_MAC5_M (0xffULL << IRDMA_CQPSQ_MLM_MAC5_S) + +/* Manage ARP Table - MAT */ +#define IRDMA_CQPSQ_MAT_REACHMAX_S 0 +#define IRDMA_CQPSQ_MAT_REACHMAX_M \ + (0xffffffffULL << IRDMA_CQPSQ_MAT_REACHMAX_S) + +#define IRDMA_CQPSQ_MAT_MACADDR_S 0 +#define IRDMA_CQPSQ_MAT_MACADDR_M \ + (0xffffffffffffULL << IRDMA_CQPSQ_MAT_MACADDR_S) + +#define IRDMA_CQPSQ_MAT_ARPENTRYIDX_S 0 +#define IRDMA_CQPSQ_MAT_ARPENTRYIDX_M \ + (0xfffULL << IRDMA_CQPSQ_MAT_ARPENTRYIDX_S) + +#define IRDMA_CQPSQ_MAT_ENTRYVALID_S 42 +#define IRDMA_CQPSQ_MAT_ENTRYVALID_M \ + BIT_ULL(IRDMA_CQPSQ_MAT_ENTRYVALID_S) + +#define IRDMA_CQPSQ_MAT_PERMANENT_S 43 +#define IRDMA_CQPSQ_MAT_PERMANENT_M \ + BIT_ULL(IRDMA_CQPSQ_MAT_PERMANENT_S) + +#define IRDMA_CQPSQ_MAT_QUERY_S 44 +#define IRDMA_CQPSQ_MAT_QUERY_M BIT_ULL(IRDMA_CQPSQ_MAT_QUERY_S) + +/* Manage VF PBLE Backing Pages - MVPBP*/ +#define IRDMA_CQPSQ_MVPBP_PD_ENTRY_CNT_S 0 +#define IRDMA_CQPSQ_MVPBP_PD_ENTRY_CNT_M \ + (0x3ffULL << IRDMA_CQPSQ_MVPBP_PD_ENTRY_CNT_S) + +#define IRDMA_CQPSQ_MVPBP_FIRST_PD_INX_S 16 +#define IRDMA_CQPSQ_MVPBP_FIRST_PD_INX_M \ + (0x1ffULL << IRDMA_CQPSQ_MVPBP_FIRST_PD_INX_S) + +#define IRDMA_CQPSQ_MVPBP_SD_INX_S 32 +#define IRDMA_CQPSQ_MVPBP_SD_INX_M \ + (0xfffULL << IRDMA_CQPSQ_MVPBP_SD_INX_S) + +#define IRDMA_CQPSQ_MVPBP_INV_PD_ENT_S 62 +#define IRDMA_CQPSQ_MVPBP_INV_PD_ENT_M \ + BIT_ULL(IRDMA_CQPSQ_MVPBP_INV_PD_ENT_S) + +#define IRDMA_CQPSQ_MVPBP_PD_PLPBA_S 3 +#define IRDMA_CQPSQ_MVPBP_PD_PLPBA_M \ + (0x1fffffffffffffffULL << IRDMA_CQPSQ_MVPBP_PD_PLPBA_S) + +/* Manage Push Page - MPP */ +#define IRDMA_INVALID_PUSH_PAGE_INDEX 0xffff + +#define IRDMA_CQPSQ_MPP_QS_HANDLE_S 0 +#define IRDMA_CQPSQ_MPP_QS_HANDLE_M \ + (0x3ffULL << IRDMA_CQPSQ_MPP_QS_HANDLE_S) + +#define IRDMA_CQPSQ_MPP_PPIDX_S 0 +#define IRDMA_CQPSQ_MPP_PPIDX_M (0x3ffULL << IRDMA_CQPSQ_MPP_PPIDX_S) + +#define IRDMA_CQPSQ_MPP_PPTYPE_S 60 +#define IRDMA_CQPSQ_MPP_PPTYPE_M (0x3ULL << IRDMA_CQPSQ_MPP_PPTYPE_S) + +#define IRDMA_CQPSQ_MPP_FREE_PAGE_S 62 +#define IRDMA_CQPSQ_MPP_FREE_PAGE_M BIT_ULL(IRDMA_CQPSQ_MPP_FREE_PAGE_S) + +/* Upload Context - UCTX */ +#define IRDMA_CQPSQ_UCTX_QPCTXADDR_S IRDMA_CQPHC_QPCTX_S +#define IRDMA_CQPSQ_UCTX_QPCTXADDR_M IRDMA_CQPHC_QPCTX_M + +#define IRDMA_CQPSQ_UCTX_QPID_S 0 +#define IRDMA_CQPSQ_UCTX_QPID_M (0x3ffffULL << IRDMA_CQPSQ_UCTX_QPID_S) + +#define IRDMA_CQPSQ_UCTX_QPTYPE_S 48 +#define IRDMA_CQPSQ_UCTX_QPTYPE_M (0xfULL << IRDMA_CQPSQ_UCTX_QPTYPE_S) + +#define 
IRDMA_CQPSQ_UCTX_RAWFORMAT_S 61 +#define IRDMA_CQPSQ_UCTX_RAWFORMAT_M \ + BIT_ULL(IRDMA_CQPSQ_UCTX_RAWFORMAT_S) + +#define IRDMA_CQPSQ_UCTX_FREEZEQP_S 62 +#define IRDMA_CQPSQ_UCTX_FREEZEQP_M \ + BIT_ULL(IRDMA_CQPSQ_UCTX_FREEZEQP_S) + +/* Manage HMC PM Function Table - MHMC */ +#define IRDMA_CQPSQ_MHMC_VFIDX_S 0 +#define IRDMA_CQPSQ_MHMC_VFIDX_M (0xffffULL << IRDMA_CQPSQ_MHMC_VFIDX_S) + +#define IRDMA_CQPSQ_MHMC_FREEPMFN_S 62 +#define IRDMA_CQPSQ_MHMC_FREEPMFN_M \ + BIT_ULL(IRDMA_CQPSQ_MHMC_FREEPMFN_S) + +/* Set HMC Resource Profile - SHMCRP */ +#define IRDMA_CQPSQ_SHMCRP_HMC_PROFILE_S 0 +#define IRDMA_CQPSQ_SHMCRP_HMC_PROFILE_M \ + (0x7ULL << IRDMA_CQPSQ_SHMCRP_HMC_PROFILE_S) +#define IRDMA_CQPSQ_SHMCRP_VFNUM_S 32 +#define IRDMA_CQPSQ_SHMCRP_VFNUM_M (0x3fULL << IRDMA_CQPSQ_SHMCRP_VFNUM_S) + +/* Create/Destroy CEQ */ +#define IRDMA_CQPSQ_CEQ_CEQSIZE_S 0 +#define IRDMA_CQPSQ_CEQ_CEQSIZE_M \ + (0x1ffffULL << IRDMA_CQPSQ_CEQ_CEQSIZE_S) + +#define IRDMA_CQPSQ_CEQ_CEQID_S 0 +#define IRDMA_CQPSQ_CEQ_CEQID_M (0x3ffULL << IRDMA_CQPSQ_CEQ_CEQID_S) + +#define IRDMA_CQPSQ_CEQ_LPBLSIZE_S IRDMA_CQPSQ_CQ_LPBLSIZE_S +#define IRDMA_CQPSQ_CEQ_LPBLSIZE_M IRDMA_CQPSQ_CQ_LPBLSIZE_M + +#define IRDMA_CQPSQ_CEQ_VMAP_S 47 +#define IRDMA_CQPSQ_CEQ_VMAP_M BIT_ULL(IRDMA_CQPSQ_CEQ_VMAP_S) + +#define IRDMA_CQPSQ_CEQ_ITRNOEXPIRE_S 46 +#define IRDMA_CQPSQ_CEQ_ITRNOEXPIRE_M BIT_ULL(IRDMA_CQPSQ_CEQ_ITRNOEXPIRE_S) + +#define IRDMA_CQPSQ_CEQ_FIRSTPMPBLIDX_S 0 +#define IRDMA_CQPSQ_CEQ_FIRSTPMPBLIDX_M \ + (0xfffffffULL << IRDMA_CQPSQ_CEQ_FIRSTPMPBLIDX_S) + +/* Create/Destroy AEQ */ +#define IRDMA_CQPSQ_AEQ_AEQECNT_S 0 +#define IRDMA_CQPSQ_AEQ_AEQECNT_M \ + (0x7ffffULL << IRDMA_CQPSQ_AEQ_AEQECNT_S) + +#define IRDMA_CQPSQ_AEQ_LPBLSIZE_S IRDMA_CQPSQ_CQ_LPBLSIZE_S +#define IRDMA_CQPSQ_AEQ_LPBLSIZE_M IRDMA_CQPSQ_CQ_LPBLSIZE_M + +#define IRDMA_CQPSQ_AEQ_VMAP_S 47 +#define IRDMA_CQPSQ_AEQ_VMAP_M BIT_ULL(IRDMA_CQPSQ_AEQ_VMAP_S) + +#define IRDMA_CQPSQ_AEQ_FIRSTPMPBLIDX_S 0 +#define IRDMA_CQPSQ_AEQ_FIRSTPMPBLIDX_M \ + (0xfffffffULL << IRDMA_CQPSQ_AEQ_FIRSTPMPBLIDX_S) + +/* Commit FPM Values - CFPM */ +#define IRDMA_COMMIT_FPM_QPCNT_S 0 +#define IRDMA_COMMIT_FPM_QPCNT_M (0x7ffffULL << IRDMA_COMMIT_FPM_QPCNT_S) + +#define IRDMA_COMMIT_FPM_CQCNT_S 0 +#define IRDMA_COMMIT_FPM_CQCNT_M (0xfffffULL << IRDMA_COMMIT_FPM_CQCNT_S) + +#define IRDMA_COMMIT_FPM_BASE_S 32 + +#define IRDMA_CQPSQ_CFPM_HMCFNID_S 0 +#define IRDMA_CQPSQ_CFPM_HMCFNID_M (0x3fULL << IRDMA_CQPSQ_CFPM_HMCFNID_S) + +/* Flush WQEs - FWQE */ +#define IRDMA_CQPSQ_FWQE_AECODE_S 0 +#define IRDMA_CQPSQ_FWQE_AECODE_M (0xffffULL << IRDMA_CQPSQ_FWQE_AECODE_S) + +#define IRDMA_CQPSQ_FWQE_AESOURCE_S 16 +#define IRDMA_CQPSQ_FWQE_AESOURCE_M \ + (0xfULL << IRDMA_CQPSQ_FWQE_AESOURCE_S) + +#define IRDMA_CQPSQ_FWQE_RQMNERR_S 0 +#define IRDMA_CQPSQ_FWQE_RQMNERR_M \ + (0xffffULL << IRDMA_CQPSQ_FWQE_RQMNERR_S) + +#define IRDMA_CQPSQ_FWQE_RQMJERR_S 16 +#define IRDMA_CQPSQ_FWQE_RQMJERR_M \ + (0xffffULL << IRDMA_CQPSQ_FWQE_RQMJERR_S) + +#define IRDMA_CQPSQ_FWQE_SQMNERR_S 32 +#define IRDMA_CQPSQ_FWQE_SQMNERR_M \ + (0xffffULL << IRDMA_CQPSQ_FWQE_SQMNERR_S) + +#define IRDMA_CQPSQ_FWQE_SQMJERR_S 48 +#define IRDMA_CQPSQ_FWQE_SQMJERR_M \ + (0xffffULL << IRDMA_CQPSQ_FWQE_SQMJERR_S) + +#define IRDMA_CQPSQ_FWQE_QPID_S 0 +#define IRDMA_CQPSQ_FWQE_QPID_M (0x3ffffULL << IRDMA_CQPSQ_FWQE_QPID_S) + +#define IRDMA_CQPSQ_FWQE_GENERATE_AE_S 59 +#define IRDMA_CQPSQ_FWQE_GENERATE_AE_M \ + BIT_ULL(IRDMA_CQPSQ_FWQE_GENERATE_AE_S) + +#define IRDMA_CQPSQ_FWQE_USERFLCODE_S 60 +#define IRDMA_CQPSQ_FWQE_USERFLCODE_M \ + 
BIT_ULL(IRDMA_CQPSQ_FWQE_USERFLCODE_S) + +#define IRDMA_CQPSQ_FWQE_FLUSHSQ_S 61 +#define IRDMA_CQPSQ_FWQE_FLUSHSQ_M BIT_ULL(IRDMA_CQPSQ_FWQE_FLUSHSQ_S) + +#define IRDMA_CQPSQ_FWQE_FLUSHRQ_S 62 +#define IRDMA_CQPSQ_FWQE_FLUSHRQ_M BIT_ULL(IRDMA_CQPSQ_FWQE_FLUSHRQ_S) + +/* Manage Accelerated Port Table - MAPT */ +#define IRDMA_CQPSQ_MAPT_PORT_S 0 +#define IRDMA_CQPSQ_MAPT_PORT_M (0xffffULL << IRDMA_CQPSQ_MAPT_PORT_S) + +#define IRDMA_CQPSQ_MAPT_ADDPORT_S 62 +#define IRDMA_CQPSQ_MAPT_ADDPORT_M BIT_ULL(IRDMA_CQPSQ_MAPT_ADDPORT_S) + +/* Update Protocol Engine SDs */ +#define IRDMA_CQPSQ_UPESD_SDCMD_S 0 +#define IRDMA_CQPSQ_UPESD_SDCMD_M (0xffffffffULL << IRDMA_CQPSQ_UPESD_SDCMD_S) + +#define IRDMA_CQPSQ_UPESD_SDDATALOW_S 0 +#define IRDMA_CQPSQ_UPESD_SDDATALOW_M \ + (0xffffffffULL << IRDMA_CQPSQ_UPESD_SDDATALOW_S) + +#define IRDMA_CQPSQ_UPESD_SDDATAHI_S 32 +#define IRDMA_CQPSQ_UPESD_SDDATAHI_M \ + (0xffffffffULL << IRDMA_CQPSQ_UPESD_SDDATAHI_S) +#define IRDMA_CQPSQ_UPESD_HMCFNID_S 0 +#define IRDMA_CQPSQ_UPESD_HMCFNID_M \ + (0x3fULL << IRDMA_CQPSQ_UPESD_HMCFNID_S) + +#define IRDMA_CQPSQ_UPESD_ENTRY_VALID_S 63 +#define IRDMA_CQPSQ_UPESD_ENTRY_VALID_M \ + BIT_ULL(IRDMA_CQPSQ_UPESD_ENTRY_VALID_S) + +#define IRDMA_CQPSQ_UPESD_ENTRY_COUNT_S 0 +#define IRDMA_CQPSQ_UPESD_ENTRY_COUNT_M \ + (0xfULL << IRDMA_CQPSQ_UPESD_ENTRY_COUNT_S) + +#define IRDMA_CQPSQ_UPESD_SKIP_ENTRY_S 7 +#define IRDMA_CQPSQ_UPESD_SKIP_ENTRY_M \ + BIT_ULL(IRDMA_CQPSQ_UPESD_SKIP_ENTRY_S) + +/* Suspend QP */ +#define IRDMA_CQPSQ_SUSPENDQP_QPID_S 0 +#define IRDMA_CQPSQ_SUSPENDQP_QPID_M (0x3FFFFULL) + +/* Resume QP */ +#define IRDMA_CQPSQ_RESUMEQP_QSHANDLE_S 0 +#define IRDMA_CQPSQ_RESUMEQP_QSHANDLE_M \ + (0xffffffffULL << IRDMA_CQPSQ_RESUMEQP_QSHANDLE_S) + +#define IRDMA_CQPSQ_RESUMEQP_QPID_S 0 +#define IRDMA_CQPSQ_RESUMEQP_QPID_M (0x3FFFFUL) + +/* IW QP Context */ +#define IRDMAQPC_DDP_VER_S 0 +#define IRDMAQPC_DDP_VER_M (3ULL << IRDMAQPC_DDP_VER_S) + +#define IRDMAQPC_IBRDENABLE_S 2 +#define IRDMAQPC_IBRDENABLE_M BIT_ULL(IRDMAQPC_IBRDENABLE_S) + +#define IRDMAQPC_IPV4_S 3 +#define IRDMAQPC_IPV4_M BIT_ULL(IRDMAQPC_IPV4_S) + +#define IRDMAQPC_NONAGLE_S 4 +#define IRDMAQPC_NONAGLE_M BIT_ULL(IRDMAQPC_NONAGLE_S) + +#define IRDMAQPC_INSERTVLANTAG_S 5 +#define IRDMAQPC_INSERTVLANTAG_M BIT_ULL(IRDMAQPC_INSERTVLANTAG_S) + +#define IRDMAQPC_ISQP1_S 6 +#define IRDMAQPC_ISQP1_M BIT_ULL(IRDMAQPC_ISQP1_S) + +#define IRDMAQPC_TIMESTAMP_S 7 +#define IRDMAQPC_TIMESTAMP_M BIT_ULL(IRDMAQPC_TIMESTAMP_S) + +#define IRDMAQPC_RQWQESIZE_S 8 +#define IRDMAQPC_RQWQESIZE_M (3ULL << IRDMAQPC_RQWQESIZE_S) + +#define IRDMAQPC_INSERTL2TAG2_S 11 +#define IRDMAQPC_INSERTL2TAG2_M BIT_ULL(IRDMAQPC_INSERTL2TAG2_S) + +#define IRDMAQPC_LIMIT_S 12 +#define IRDMAQPC_LIMIT_M (3ULL << IRDMAQPC_LIMIT_S) + +#define IRDMAQPC_ECN_EN_S 14 +#define IRDMAQPC_ECN_EN_M BIT_ULL(IRDMAQPC_ECN_EN_S) + +#define IRDMAQPC_DROPOOOSEG_S 15 +#define IRDMAQPC_DROPOOOSEG_M BIT_ULL(IRDMAQPC_DROPOOOSEG_S) + +#define IRDMAQPC_DUPACK_THRESH_S 16 +#define IRDMAQPC_DUPACK_THRESH_M (7ULL << IRDMAQPC_DUPACK_THRESH_S) + +#define IRDMAQPC_ERR_RQ_IDX_VALID_S 19 +#define IRDMAQPC_ERR_RQ_IDX_VALID_M BIT_ULL(IRDMAQPC_ERR_RQ_IDX_VALID_S) + +#define IRDMAQPC_DIS_VLAN_CHECKS_S 19 +#define IRDMAQPC_DIS_VLAN_CHECKS_M (7ULL << IRDMAQPC_DIS_VLAN_CHECKS_S) + +#define IRDMAQPC_DC_TCP_EN_S 25 +#define IRDMAQPC_DC_TCP_EN_M BIT_ULL(IRDMAQPC_DC_TCP_EN_S) + +#define IRDMAQPC_RCVTPHEN_S 28 +#define IRDMAQPC_RCVTPHEN_M BIT_ULL(IRDMAQPC_RCVTPHEN_S) + +#define IRDMAQPC_XMITTPHEN_S 29 +#define IRDMAQPC_XMITTPHEN_M 
BIT_ULL(IRDMAQPC_XMITTPHEN_S) + +#define IRDMAQPC_RQTPHEN_S 30 +#define IRDMAQPC_RQTPHEN_M BIT_ULL(IRDMAQPC_RQTPHEN_S) + +#define IRDMAQPC_SQTPHEN_S 31 +#define IRDMAQPC_SQTPHEN_M BIT_ULL(IRDMAQPC_SQTPHEN_S) + +#define IRDMAQPC_PPIDX_S 32 +#define IRDMAQPC_PPIDX_M (0x3ffULL << IRDMAQPC_PPIDX_S) + +#define IRDMAQPC_PMENA_S 47 +#define IRDMAQPC_PMENA_M BIT_ULL(IRDMAQPC_PMENA_S) + +#define IRDMAQPC_RDMAP_VER_S 62 +#define IRDMAQPC_RDMAP_VER_M (3ULL << IRDMAQPC_RDMAP_VER_S) + +#define IRDMAQPC_ROCE_TVER_S 60 +#define IRDMAQPC_ROCE_TVER_M (0x0fULL << IRDMAQPC_ROCE_TVER_S) +#define IRDMAQPC_SQADDR_S IRDMA_CQPHC_QPCTX_S +#define IRDMAQPC_SQADDR_M IRDMA_CQPHC_QPCTX_M + +#define IRDMAQPC_RQADDR_S IRDMA_CQPHC_QPCTX_S +#define IRDMAQPC_RQADDR_M IRDMA_CQPHC_QPCTX_M + +#define IRDMAQPC_TTL_S 0 +#define IRDMAQPC_TTL_M (0xffULL << IRDMAQPC_TTL_S) + +#define IRDMAQPC_RQSIZE_S 8 +#define IRDMAQPC_RQSIZE_M (0xfULL << IRDMAQPC_RQSIZE_S) + +#define IRDMAQPC_SQSIZE_S 12 +#define IRDMAQPC_SQSIZE_M (0xfULL << IRDMAQPC_SQSIZE_S) + +#define IRDMAQPC_GEN1_SRCMACADDRIDX_S 16 +#define IRDMAQPC_GEN1_SRCMACADDRIDX_M (0x3fUL << IRDMAQPC_GEN1_SRCMACADDRIDX_S) + +#define IRDMAQPC_AVOIDSTRETCHACK_S 23 +#define IRDMAQPC_AVOIDSTRETCHACK_M BIT_ULL(IRDMAQPC_AVOIDSTRETCHACK_S) + +#define IRDMAQPC_TOS_S 24 +#define IRDMAQPC_TOS_M (0xffULL << IRDMAQPC_TOS_S) + +#define IRDMAQPC_SRCPORTNUM_S 32 +#define IRDMAQPC_SRCPORTNUM_M (0xffffULL << IRDMAQPC_SRCPORTNUM_S) + +#define IRDMAQPC_DESTPORTNUM_S 48 +#define IRDMAQPC_DESTPORTNUM_M (0xffffULL << IRDMAQPC_DESTPORTNUM_S) + +#define IRDMAQPC_DESTIPADDR0_S 32 +#define IRDMAQPC_DESTIPADDR0_M \ + (0xffffffffULL << IRDMAQPC_DESTIPADDR0_S) + +#define IRDMAQPC_DESTIPADDR1_S 0 +#define IRDMAQPC_DESTIPADDR1_M \ + (0xffffffffULL << IRDMAQPC_DESTIPADDR1_S) + +#define IRDMAQPC_DESTIPADDR2_S 32 +#define IRDMAQPC_DESTIPADDR2_M \ + (0xffffffffULL << IRDMAQPC_DESTIPADDR2_S) + +#define IRDMAQPC_DESTIPADDR3_S 0 +#define IRDMAQPC_DESTIPADDR3_M \ + (0xffffffffULL << IRDMAQPC_DESTIPADDR3_S) + +#define IRDMAQPC_SNDMSS_S 16 +#define IRDMAQPC_SNDMSS_M (0x3fffULL << IRDMAQPC_SNDMSS_S) + +#define IRDMAQPC_SYN_RST_HANDLING_S 30 +#define IRDMAQPC_SYN_RST_HANDLING_M (0x3ULL << IRDMAQPC_SYN_RST_HANDLING_S) + +#define IRDMAQPC_VLANTAG_S 32 +#define IRDMAQPC_VLANTAG_M (0xffffULL << IRDMAQPC_VLANTAG_S) + +#define IRDMAQPC_ARPIDX_S 48 +#define IRDMAQPC_ARPIDX_M (0xffffULL << IRDMAQPC_ARPIDX_S) + +#define IRDMAQPC_FLOWLABEL_S 0 +#define IRDMAQPC_FLOWLABEL_M (0xfffffULL << IRDMAQPC_FLOWLABEL_S) + +#define IRDMAQPC_WSCALE_S 20 +#define IRDMAQPC_WSCALE_M BIT_ULL(IRDMAQPC_WSCALE_S) + +#define IRDMAQPC_KEEPALIVE_S 21 +#define IRDMAQPC_KEEPALIVE_M BIT_ULL(IRDMAQPC_KEEPALIVE_S) + +#define IRDMAQPC_IGNORE_TCP_OPT_S 22 +#define IRDMAQPC_IGNORE_TCP_OPT_M BIT_ULL(IRDMAQPC_IGNORE_TCP_OPT_S) + +#define IRDMAQPC_IGNORE_TCP_UNS_OPT_S 23 +#define IRDMAQPC_IGNORE_TCP_UNS_OPT_M \ + BIT_ULL(IRDMAQPC_IGNORE_TCP_UNS_OPT_S) + +#define IRDMAQPC_TCPSTATE_S 28 +#define IRDMAQPC_TCPSTATE_M (0xfULL << IRDMAQPC_TCPSTATE_S) + +#define IRDMAQPC_RCVSCALE_S 32 +#define IRDMAQPC_RCVSCALE_M (0xfULL << IRDMAQPC_RCVSCALE_S) + +#define IRDMAQPC_SNDSCALE_S 40 +#define IRDMAQPC_SNDSCALE_M (0xfULL << IRDMAQPC_SNDSCALE_S) + +#define IRDMAQPC_PDIDX_S 48 +#define IRDMAQPC_PDIDX_M (0xffffULL << IRDMAQPC_PDIDX_S) + +#define IRDMAQPC_PDIDXHI_S 20 +#define IRDMAQPC_PDIDXHI_M (0x3ULL << IRDMAQPC_PDIDXHI_S) + +#define IRDMAQPC_PKEY_S 32 +#define IRDMAQPC_PKEY_M (0xffffULL << IRDMAQPC_PKEY_S) + +#define IRDMAQPC_ACKCREDITS_S 20 +#define IRDMAQPC_ACKCREDITS_M (0x1fULL 
<< IRDMAQPC_ACKCREDITS_S) + +#define IRDMAQPC_QKEY_S 32 +#define IRDMAQPC_QKEY_M (0xffffffffULL << IRDMAQPC_QKEY_S) + +#define IRDMAQPC_DESTQP_S 0 +#define IRDMAQPC_DESTQP_M (0xffffffULL << IRDMAQPC_DESTQP_S) + +#define IRDMAQPC_KALIVE_TIMER_MAX_PROBES_S 16 +#define IRDMAQPC_KALIVE_TIMER_MAX_PROBES_M \ + (0xffULL << IRDMAQPC_KALIVE_TIMER_MAX_PROBES_S) + +#define IRDMAQPC_KEEPALIVE_INTERVAL_S 24 +#define IRDMAQPC_KEEPALIVE_INTERVAL_M \ + (0xffULL << IRDMAQPC_KEEPALIVE_INTERVAL_S) + +#define IRDMAQPC_TIMESTAMP_RECENT_S 0 +#define IRDMAQPC_TIMESTAMP_RECENT_M \ + (0xffffffffULL << IRDMAQPC_TIMESTAMP_RECENT_S) + +#define IRDMAQPC_TIMESTAMP_AGE_S 32 +#define IRDMAQPC_TIMESTAMP_AGE_M \ + (0xffffffffULL << IRDMAQPC_TIMESTAMP_AGE_S) + +#define IRDMAQPC_SNDNXT_S 0 +#define IRDMAQPC_SNDNXT_M (0xffffffffULL << IRDMAQPC_SNDNXT_S) + +#define IRDMAQPC_ISN_S 32 +#define IRDMAQPC_ISN_M (0x00ffffffULL << IRDMAQPC_ISN_S) + +#define IRDMAQPC_PSNNXT_S 0 +#define IRDMAQPC_PSNNXT_M (0x00ffffffULL << IRDMAQPC_PSNNXT_S) + +#define IRDMAQPC_LSN_S 32 +#define IRDMAQPC_LSN_M (0x00ffffffULL << IRDMAQPC_LSN_S) + +#define IRDMAQPC_SNDWND_S 32 +#define IRDMAQPC_SNDWND_M (0xffffffffULL << IRDMAQPC_SNDWND_S) + +#define IRDMAQPC_RCVNXT_S 0 +#define IRDMAQPC_RCVNXT_M (0xffffffffULL << IRDMAQPC_RCVNXT_S) + +#define IRDMAQPC_EPSN_S 0 +#define IRDMAQPC_EPSN_M (0x00ffffffULL << IRDMAQPC_EPSN_S) + +#define IRDMAQPC_RCVWND_S 32 +#define IRDMAQPC_RCVWND_M (0xffffffffULL << IRDMAQPC_RCVWND_S) + +#define IRDMAQPC_SNDMAX_S 0 +#define IRDMAQPC_SNDMAX_M (0xffffffffULL << IRDMAQPC_SNDMAX_S) + +#define IRDMAQPC_SNDUNA_S 32 +#define IRDMAQPC_SNDUNA_M (0xffffffffULL << IRDMAQPC_SNDUNA_S) + +#define IRDMAQPC_PSNMAX_S 0 +#define IRDMAQPC_PSNMAX_M (0x00ffffffULL << IRDMAQPC_PSNMAX_S) +#define IRDMAQPC_PSNUNA_S 32 +#define IRDMAQPC_PSNUNA_M (0xffffffULL << IRDMAQPC_PSNUNA_S) + +#define IRDMAQPC_SRTT_S 0 +#define IRDMAQPC_SRTT_M (0xffffffffULL << IRDMAQPC_SRTT_S) + +#define IRDMAQPC_RTTVAR_S 32 +#define IRDMAQPC_RTTVAR_M (0xffffffffULL << IRDMAQPC_RTTVAR_S) + +#define IRDMAQPC_SSTHRESH_S 0 +#define IRDMAQPC_SSTHRESH_M (0xffffffffULL << IRDMAQPC_SSTHRESH_S) + +#define IRDMAQPC_CWND_S 32 +#define IRDMAQPC_CWND_M (0xffffffffULL << IRDMAQPC_CWND_S) + +#define IRDMAQPC_CWNDROCE_S 32 +#define IRDMAQPC_CWNDROCE_M (0xffffffULL << IRDMAQPC_CWNDROCE_S) +#define IRDMAQPC_SNDWL1_S 0 +#define IRDMAQPC_SNDWL1_M (0xffffffffULL << IRDMAQPC_SNDWL1_S) + +#define IRDMAQPC_SNDWL2_S 32 +#define IRDMAQPC_SNDWL2_M (0xffffffffULL << IRDMAQPC_SNDWL2_S) + +#define IRDMAQPC_ERR_RQ_IDX_S 32 +#define IRDMAQPC_ERR_RQ_IDX_M (0x3fffULL << IRDMAQPC_ERR_RQ_IDX_S) + +#define IRDMAQPC_MAXSNDWND_S 0 +#define IRDMAQPC_MAXSNDWND_M (0xffffffffULL << IRDMAQPC_MAXSNDWND_S) + +#define IRDMAQPC_REXMIT_THRESH_S 48 +#define IRDMAQPC_REXMIT_THRESH_M (0x3fULL << IRDMAQPC_REXMIT_THRESH_S) + +#define IRDMAQPC_RNRNAK_THRESH_S 54 +#define IRDMAQPC_RNRNAK_THRESH_M (0x7ULL << IRDMAQPC_RNRNAK_THRESH_S) + +#define IRDMAQPC_TXCQNUM_S 0 +#define IRDMAQPC_TXCQNUM_M (0x7ffffULL << IRDMAQPC_TXCQNUM_S) + +#define IRDMAQPC_RXCQNUM_S 32 +#define IRDMAQPC_RXCQNUM_M (0x7ffffULL << IRDMAQPC_RXCQNUM_S) + +#define IRDMAQPC_STAT_INDEX_S 0 +#define IRDMAQPC_STAT_INDEX_M (0x7fULL << IRDMAQPC_STAT_INDEX_S) + +#define IRDMAQPC_Q2ADDR_S 8 +#define IRDMAQPC_Q2ADDR_M (0xffffffffffffffULL << IRDMAQPC_Q2ADDR_S) + +#define IRDMAQPC_LASTBYTESENT_S 0 +#define IRDMAQPC_LASTBYTESENT_M (0xffULL << IRDMAQPC_LASTBYTESENT_S) + +#define IRDMAQPC_MACADDRESS_S 16 +#define IRDMAQPC_MACADDRESS_M (0xffffffffffffULL << 
IRDMAQPC_MACADDRESS_S) + +#define IRDMAQPC_ORDSIZE_S 0 +#define IRDMAQPC_ORDSIZE_M (0xffULL << IRDMAQPC_ORDSIZE_S) + +#define IRDMAQPC_IRDSIZE_S 16 +#define IRDMAQPC_IRDSIZE_M (0x7ULL << IRDMAQPC_IRDSIZE_S) + +#define IRDMAQPC_UDPRIVCQENABLE_S 19 +#define IRDMAQPC_UDPRIVCQENABLE_M BIT_ULL(IRDMAQPC_UDPRIVCQENABLE_S) + +#define IRDMAQPC_WRRDRSPOK_S 20 +#define IRDMAQPC_WRRDRSPOK_M BIT_ULL(IRDMAQPC_WRRDRSPOK_S) + +#define IRDMAQPC_RDOK_S 21 +#define IRDMAQPC_RDOK_M BIT_ULL(IRDMAQPC_RDOK_S) + +#define IRDMAQPC_SNDMARKERS_S 22 +#define IRDMAQPC_SNDMARKERS_M BIT_ULL(IRDMAQPC_SNDMARKERS_S) + +#define IRDMAQPC_DCQCNENABLE_S 22 +#define IRDMAQPC_DCQCNENABLE_M BIT_ULL(IRDMAQPC_DCQCNENABLE_S) + +#define IRDMAQPC_FW_CC_ENABLE_S 28 +#define IRDMAQPC_FW_CC_ENABLE_M BIT_ULL(IRDMAQPC_FW_CC_ENABLE_S) + +#define IRDMAQPC_RCVNOICRC_S 31 +#define IRDMAQPC_RCVNOICRC_M BIT_ULL(IRDMAQPC_RCVNOICRC_S) + +#define IRDMAQPC_BINDEN_S 23 +#define IRDMAQPC_BINDEN_M BIT_ULL(IRDMAQPC_BINDEN_S) + +#define IRDMAQPC_FASTREGEN_S 24 +#define IRDMAQPC_FASTREGEN_M BIT_ULL(IRDMAQPC_FASTREGEN_S) + +#define IRDMAQPC_PRIVEN_S 25 +#define IRDMAQPC_PRIVEN_M BIT_ULL(IRDMAQPC_PRIVEN_S) + +#define IRDMAQPC_TIMELYENABLE_S 27 +#define IRDMAQPC_TIMELYENABLE_M BIT_ULL(IRDMAQPC_TIMELYENABLE_S) + +#define IRDMAQPC_THIGH_S 52 +#define IRDMAQPC_THIGH_M ((u64)0xfff << IRDMAQPC_THIGH_S) + +#define IRDMAQPC_TLOW_S 32 +#define IRDMAQPC_TLOW_M ((u64)0xFF << IRDMAQPC_TLOW_S) + +#define IRDMAQPC_REMENDPOINTIDX_S 0 +#define IRDMAQPC_REMENDPOINTIDX_M ((u64)0x1FFFF << IRDMAQPC_REMENDPOINTIDX_S) + +#define IRDMAQPC_USESTATSINSTANCE_S 26 +#define IRDMAQPC_USESTATSINSTANCE_M BIT_ULL(IRDMAQPC_USESTATSINSTANCE_S) + +#define IRDMAQPC_IWARPMODE_S 28 +#define IRDMAQPC_IWARPMODE_M BIT_ULL(IRDMAQPC_IWARPMODE_S) + +#define IRDMAQPC_RCVMARKERS_S 29 +#define IRDMAQPC_RCVMARKERS_M BIT_ULL(IRDMAQPC_RCVMARKERS_S) + +#define IRDMAQPC_ALIGNHDRS_S 30 +#define IRDMAQPC_ALIGNHDRS_M BIT_ULL(IRDMAQPC_ALIGNHDRS_S) + +#define IRDMAQPC_RCVNOMPACRC_S 31 +#define IRDMAQPC_RCVNOMPACRC_M BIT_ULL(IRDMAQPC_RCVNOMPACRC_S) + +#define IRDMAQPC_RCVMARKOFFSET_S 32 +#define IRDMAQPC_RCVMARKOFFSET_M (0x1ffULL << IRDMAQPC_RCVMARKOFFSET_S) + +#define IRDMAQPC_SNDMARKOFFSET_S 48 +#define IRDMAQPC_SNDMARKOFFSET_M (0x1ffULL << IRDMAQPC_SNDMARKOFFSET_S) + +#define IRDMAQPC_QPCOMPCTX_S IRDMA_CQPHC_QPCTX_S +#define IRDMAQPC_QPCOMPCTX_M IRDMA_CQPHC_QPCTX_M + +#define IRDMAQPC_SQTPHVAL_S 0 +#define IRDMAQPC_SQTPHVAL_M (0xffULL << IRDMAQPC_SQTPHVAL_S) + +#define IRDMAQPC_RQTPHVAL_S 8 +#define IRDMAQPC_RQTPHVAL_M (0xffULL << IRDMAQPC_RQTPHVAL_S) + +#define IRDMAQPC_QSHANDLE_S 16 +#define IRDMAQPC_QSHANDLE_M (0x3ffULL << IRDMAQPC_QSHANDLE_S) + +#define IRDMAQPC_EXCEPTION_LAN_QUEUE_S 32 +#define IRDMAQPC_EXCEPTION_LAN_QUEUE_M \ + (0xfffULL << IRDMAQPC_EXCEPTION_LAN_QUEUE_S) + +#define IRDMAQPC_LOCAL_IPADDR3_S 0 +#define IRDMAQPC_LOCAL_IPADDR3_M \ + (0xffffffffULL << IRDMAQPC_LOCAL_IPADDR3_S) + +#define IRDMAQPC_LOCAL_IPADDR2_S 32 +#define IRDMAQPC_LOCAL_IPADDR2_M \ + (0xffffffffULL << IRDMAQPC_LOCAL_IPADDR2_S) + +#define IRDMAQPC_LOCAL_IPADDR1_S 0 +#define IRDMAQPC_LOCAL_IPADDR1_M \ + (0xffffffffULL << IRDMAQPC_LOCAL_IPADDR1_S) + +#define IRDMAQPC_LOCAL_IPADDR0_S 32 +#define IRDMAQPC_LOCAL_IPADDR0_M \ + (0xffffffffULL << IRDMAQPC_LOCAL_IPADDR0_S) + +#define IRDMA_FW_VER_MINOR_S 0 +#define IRDMA_FW_VER_MINOR_M \ + (0xffffULL << IRDMA_FW_VER_MINOR_S) + +#define IRDMA_FW_VER_MAJOR_S 16 +#define IRDMA_FW_VER_MAJOR_M \ + (0xffffULL << IRDMA_FW_VER_MAJOR_S) + +#define IRDMA_FEATURE_INFO_S 0 +#define 
IRDMA_FEATURE_INFO_M \ + (0xffffffffffffULL << IRDMA_FEATURE_INFO_S) + +#define IRDMA_FEATURE_CNT_S 32 +#define IRDMA_FEATURE_CNT_M \ + (0xffffULL << IRDMA_FEATURE_CNT_S) + +#define IRDMA_FEATURE_TYPE_S 48 +#define IRDMA_FEATURE_TYPE_M \ + (0xffffULL << IRDMA_FEATURE_TYPE_S) + +#define IRDMA_RSVD_S 41 +#define IRDMA_RSVD_M (0x7fffULL << IRDMA_RSVD_S) + +/* iwarp QP SQ WQE common fields */ +#define IRDMAQPSQ_OPCODE_S 32 +#define IRDMAQPSQ_OPCODE_M (0x3fULL << IRDMAQPSQ_OPCODE_S) + +#define IRDMAQPSQ_COPY_HOST_PBL_S 43 +#define IRDMAQPSQ_COPY_HOST_PBL_M BIT_ULL(IRDMAQPSQ_COPY_HOST_PBL_S) + +#define IRDMAQPSQ_ADDFRAGCNT_S 38 +#define IRDMAQPSQ_ADDFRAGCNT_M (0xfULL << IRDMAQPSQ_ADDFRAGCNT_S) + +#define IRDMAQPSQ_PUSHWQE_S 56 +#define IRDMAQPSQ_PUSHWQE_M BIT_ULL(IRDMAQPSQ_PUSHWQE_S) + +#define IRDMAQPSQ_STREAMMODE_S 58 +#define IRDMAQPSQ_STREAMMODE_M BIT_ULL(IRDMAQPSQ_STREAMMODE_S) + +#define IRDMAQPSQ_WAITFORRCVPDU_S 59 +#define IRDMAQPSQ_WAITFORRCVPDU_M BIT_ULL(IRDMAQPSQ_WAITFORRCVPDU_S) + +#define IRDMAQPSQ_READFENCE_S 60 +#define IRDMAQPSQ_READFENCE_M BIT_ULL(IRDMAQPSQ_READFENCE_S) + +#define IRDMAQPSQ_LOCALFENCE_S 61 +#define IRDMAQPSQ_LOCALFENCE_M BIT_ULL(IRDMAQPSQ_LOCALFENCE_S) + +#define IRDMAQPSQ_UDPHEADER_S 61 +#define IRDMAQPSQ_UDPHEADER_M BIT_ULL(IRDMAQPSQ_UDPHEADER_S) + +#define IRDMAQPSQ_L4LEN_S 42 +#define IRDMAQPSQ_L4LEN_M ((u64)0xF << IRDMAQPSQ_L4LEN_S) + +#define IRDMAQPSQ_SIGCOMPL_S 62 +#define IRDMAQPSQ_SIGCOMPL_M BIT_ULL(IRDMAQPSQ_SIGCOMPL_S) + +#define IRDMAQPSQ_VALID_S 63 +#define IRDMAQPSQ_VALID_M BIT_ULL(IRDMAQPSQ_VALID_S) + +#define IRDMAQPSQ_FRAG_TO_S IRDMA_CQPHC_QPCTX_S +#define IRDMAQPSQ_FRAG_TO_M IRDMA_CQPHC_QPCTX_M + +#define IRDMAQPSQ_FRAG_VALID_S 63 +#define IRDMAQPSQ_FRAG_VALID_M BIT_ULL(IRDMAQPSQ_FRAG_VALID_S) + +#define IRDMAQPSQ_FRAG_LEN_S 32 +#define IRDMAQPSQ_FRAG_LEN_M (0x7fffffffULL << IRDMAQPSQ_FRAG_LEN_S) + +#define IRDMAQPSQ_FRAG_STAG_S 0 +#define IRDMAQPSQ_FRAG_STAG_M (0xffffffffULL << IRDMAQPSQ_FRAG_STAG_S) + +#define IRDMAQPSQ_GEN1_FRAG_LEN_S 0 +#define IRDMAQPSQ_GEN1_FRAG_LEN_M (0xffffffffULL << IRDMAQPSQ_GEN1_FRAG_LEN_S) + +#define IRDMAQPSQ_GEN1_FRAG_STAG_S 32 +#define IRDMAQPSQ_GEN1_FRAG_STAG_M (0xffffffffULL << IRDMAQPSQ_GEN1_FRAG_STAG_S) + +#define IRDMAQPSQ_REMSTAGINV_S 0 +#define IRDMAQPSQ_REMSTAGINV_M (0xffffffffULL << IRDMAQPSQ_REMSTAGINV_S) + +#define IRDMAQPSQ_DESTQKEY_S 0 +#define IRDMAQPSQ_DESTQKEY_M (0xffffffffULL << IRDMAQPSQ_DESTQKEY_S) + +#define IRDMAQPSQ_DESTQPN_S 32 +#define IRDMAQPSQ_DESTQPN_M (0x00ffffffULL << IRDMAQPSQ_DESTQPN_S) + +#define IRDMAQPSQ_AHID_S 0 +#define IRDMAQPSQ_AHID_M (0x0001ffffULL << IRDMAQPSQ_AHID_S) + +#define IRDMAQPSQ_INLINEDATAFLAG_S 57 +#define IRDMAQPSQ_INLINEDATAFLAG_M BIT_ULL(IRDMAQPSQ_INLINEDATAFLAG_S) + +#define IRDMA_INLINE_VALID_S 7 + +#define IRDMAQPSQ_INLINEDATALEN_S 48 +#define IRDMAQPSQ_INLINEDATALEN_M \ + (0xffULL << IRDMAQPSQ_INLINEDATALEN_S) +#define IRDMAQPSQ_IMMDATAFLAG_S 47 +#define IRDMAQPSQ_IMMDATAFLAG_M \ + BIT_ULL(IRDMAQPSQ_IMMDATAFLAG_S) +#define IRDMAQPSQ_REPORTRTT_S 46 +#define IRDMAQPSQ_REPORTRTT_M \ + BIT_ULL(IRDMAQPSQ_REPORTRTT_S) + +#define IRDMAQPSQ_IMMDATA_S 0 +#define IRDMAQPSQ_IMMDATA_M \ + (0xffffffffffffffffULL << IRDMAQPSQ_IMMDATA_S) + +/* rdma write */ +#define IRDMAQPSQ_REMSTAG_S 0 +#define IRDMAQPSQ_REMSTAG_M (0xffffffffULL << IRDMAQPSQ_REMSTAG_S) + +#define IRDMAQPSQ_REMTO_S IRDMA_CQPHC_QPCTX_S +#define IRDMAQPSQ_REMTO_M IRDMA_CQPHC_QPCTX_M + +/* memory window */ +#define IRDMAQPSQ_STAGRIGHTS_S 48 +#define IRDMAQPSQ_STAGRIGHTS_M (0x1fULL << 
IRDMAQPSQ_STAGRIGHTS_S) + +#define IRDMAQPSQ_VABASEDTO_S 53 +#define IRDMAQPSQ_VABASEDTO_M BIT_ULL(IRDMAQPSQ_VABASEDTO_S) + +#define IRDMAQPSQ_MEMWINDOWTYPE_S 54 +#define IRDMAQPSQ_MEMWINDOWTYPE_M BIT_ULL(IRDMAQPSQ_MEMWINDOWTYPE_S) + +#define IRDMAQPSQ_MWLEN_S IRDMA_CQPHC_QPCTX_S +#define IRDMAQPSQ_MWLEN_M IRDMA_CQPHC_QPCTX_M + +#define IRDMAQPSQ_PARENTMRSTAG_S 32 +#define IRDMAQPSQ_PARENTMRSTAG_M \ + (0xffffffffULL << IRDMAQPSQ_PARENTMRSTAG_S) + +#define IRDMAQPSQ_MWSTAG_S 0 +#define IRDMAQPSQ_MWSTAG_M (0xffffffffULL << IRDMAQPSQ_MWSTAG_S) + +#define IRDMAQPSQ_BASEVA_TO_FBO_S IRDMA_CQPHC_QPCTX_S +#define IRDMAQPSQ_BASEVA_TO_FBO_M IRDMA_CQPHC_QPCTX_M + +/* Local Invalidate */ +#define IRDMAQPSQ_LOCSTAG_S 0 +#define IRDMAQPSQ_LOCSTAG_M (0xffffffffULL << IRDMAQPSQ_LOCSTAG_S) + +/* Fast Register */ +#define IRDMAQPSQ_STAGKEY_S 0 +#define IRDMAQPSQ_STAGKEY_M (0xffULL << IRDMAQPSQ_STAGKEY_S) + +#define IRDMAQPSQ_STAGINDEX_S 8 +#define IRDMAQPSQ_STAGINDEX_M (0xffffffULL << IRDMAQPSQ_STAGINDEX_S) + +#define IRDMAQPSQ_COPYHOSTPBLS_S 43 +#define IRDMAQPSQ_COPYHOSTPBLS_M BIT_ULL(IRDMAQPSQ_COPYHOSTPBLS_S) + +#define IRDMAQPSQ_LPBLSIZE_S 44 +#define IRDMAQPSQ_LPBLSIZE_M (3ULL << IRDMAQPSQ_LPBLSIZE_S) + +#define IRDMAQPSQ_HPAGESIZE_S 46 +#define IRDMAQPSQ_HPAGESIZE_M (3ULL << IRDMAQPSQ_HPAGESIZE_S) + +#define IRDMAQPSQ_STAGLEN_S 0 +#define IRDMAQPSQ_STAGLEN_M (0x1ffffffffffULL << IRDMAQPSQ_STAGLEN_S) + +#define IRDMAQPSQ_FIRSTPMPBLIDXLO_S 48 +#define IRDMAQPSQ_FIRSTPMPBLIDXLO_M \ + (0xffffULL << IRDMAQPSQ_FIRSTPMPBLIDXLO_S) + +#define IRDMAQPSQ_FIRSTPMPBLIDXHI_S 0 +#define IRDMAQPSQ_FIRSTPMPBLIDXHI_M \ + (0xfffULL << IRDMAQPSQ_FIRSTPMPBLIDXHI_S) + +#define IRDMAQPSQ_PBLADDR_S 12 +#define IRDMAQPSQ_PBLADDR_M (0xfffffffffffffULL << IRDMAQPSQ_PBLADDR_S) + +/* iwarp QP RQ WQE common fields */ +#define IRDMAQPRQ_ADDFRAGCNT_S IRDMAQPSQ_ADDFRAGCNT_S +#define IRDMAQPRQ_ADDFRAGCNT_M IRDMAQPSQ_ADDFRAGCNT_M + +#define IRDMAQPRQ_VALID_S IRDMAQPSQ_VALID_S +#define IRDMAQPRQ_VALID_M IRDMAQPSQ_VALID_M + +#define IRDMAQPRQ_COMPLCTX_S IRDMA_CQPHC_QPCTX_S +#define IRDMAQPRQ_COMPLCTX_M IRDMA_CQPHC_QPCTX_M + +#define IRDMAQPRQ_FRAG_LEN_S IRDMAQPSQ_FRAG_LEN_S +#define IRDMAQPRQ_FRAG_LEN_M IRDMAQPSQ_FRAG_LEN_M + +#define IRDMAQPRQ_STAG_S IRDMAQPSQ_FRAG_STAG_S +#define IRDMAQPRQ_STAG_M IRDMAQPSQ_FRAG_STAG_M + +#define IRDMAQPRQ_TO_S IRDMAQPSQ_FRAG_TO_S +#define IRDMAQPRQ_TO_M IRDMAQPSQ_FRAG_TO_M + +/* Query FPM CQP buf */ +#define IRDMA_QUERY_FPM_MAX_QPS_S 0 +#define IRDMA_QUERY_FPM_MAX_QPS_M \ + (0x7ffffULL << IRDMA_QUERY_FPM_MAX_QPS_S) + +#define IRDMA_QUERY_FPM_MAX_CQS_S 0 +#define IRDMA_QUERY_FPM_MAX_CQS_M \ + (0xfffffULL << IRDMA_QUERY_FPM_MAX_CQS_S) + +#define IRDMA_QUERY_FPM_FIRST_PE_SD_INDEX_S 0 +#define IRDMA_QUERY_FPM_FIRST_PE_SD_INDEX_M \ + (0x3fffULL << IRDMA_QUERY_FPM_FIRST_PE_SD_INDEX_S) + +#define IRDMA_QUERY_FPM_MAX_PE_SDS_S 32 +#define IRDMA_QUERY_FPM_MAX_PE_SDS_M \ + (0x3fffULL << IRDMA_QUERY_FPM_MAX_PE_SDS_S) + +#define IRDMA_QUERY_FPM_MAX_CEQS_S 0 +#define IRDMA_QUERY_FPM_MAX_CEQS_M \ + (0x3ffULL << IRDMA_QUERY_FPM_MAX_CEQS_S) + +#define IRDMA_QUERY_FPM_XFBLOCKSIZE_S 32 +#define IRDMA_QUERY_FPM_XFBLOCKSIZE_M \ + (0xffffffffULL << IRDMA_QUERY_FPM_XFBLOCKSIZE_S) + +#define IRDMA_QUERY_FPM_Q1BLOCKSIZE_S 32 +#define IRDMA_QUERY_FPM_Q1BLOCKSIZE_M \ + (0xffffffffULL << IRDMA_QUERY_FPM_Q1BLOCKSIZE_S) + +#define IRDMA_QUERY_FPM_HTMULTIPLIER_S 16 +#define IRDMA_QUERY_FPM_HTMULTIPLIER_M \ + (0xfULL << IRDMA_QUERY_FPM_HTMULTIPLIER_S) + +#define IRDMA_QUERY_FPM_TIMERBUCKET_S 32 +#define IRDMA_QUERY_FPM_TIMERBUCKET_M 
\ + (0xffFFULL << IRDMA_QUERY_FPM_TIMERBUCKET_S) + +#define IRDMA_QUERY_FPM_RRFBLOCKSIZE_S 32 +#define IRDMA_QUERY_FPM_RRFBLOCKSIZE_M \ + (0xffffffffULL << IRDMA_QUERY_FPM_RRFBLOCKSIZE_S) + +#define IRDMA_QUERY_FPM_RRFFLBLOCKSIZE_S 32 +#define IRDMA_QUERY_FPM_RRFFLBLOCKSIZE_M \ + (0xffffffffULL << IRDMA_QUERY_FPM_RRFFLBLOCKSIZE_S) + +#define IRDMA_QUERY_FPM_OOISCFBLOCKSIZE_S 32 +#define IRDMA_QUERY_FPM_OOISCFBLOCKSIZE_M \ + (0xffffffffULL << IRDMA_QUERY_FPM_OOISCFBLOCKSIZE_S) + +/* Static HMC pages allocated buf */ +#define IRDMA_SHMC_PAGE_ALLOCATED_HMC_FN_ID_S 0 +#define IRDMA_SHMC_PAGE_ALLOCATED_HMC_FN_ID_M \ + (0x3fULL << IRDMA_SHMC_PAGE_ALLOCATED_HMC_FN_ID_S) + +#define IRDMA_GET_CURRENT_AEQ_ELEM(_aeq) \ + ( \ + (_aeq)->aeqe_base[IRDMA_RING_CURRENT_TAIL((_aeq)->aeq_ring)].buf \ + ) + +#define IRDMA_GET_CURRENT_CEQ_ELEM(_ceq) \ + ( \ + (_ceq)->ceqe_base[IRDMA_RING_CURRENT_TAIL((_ceq)->ceq_ring)].buf \ + ) + +#define IRDMA_CQP_INIT_WQE(wqe) memset(wqe, 0, 64) + +#define IRDMA_GET_CURRENT_CQ_ELEM(_cq) \ + ( \ + (_cq)->cq_base[IRDMA_RING_CURRENT_HEAD((_cq)->cq_ring)].buf \ + ) +#define IRDMA_GET_CURRENT_EXTENDED_CQ_ELEM(_cq) \ + ( \ + ((struct irdma_extended_cqe *) \ + ((_cq)->cq_base))[IRDMA_RING_CURRENT_HEAD((_cq)->cq_ring)].buf \ + ) + +#define IRDMA_RING_INIT(_ring, _size) \ + { \ + (_ring).head = 0; \ + (_ring).tail = 0; \ + (_ring).size = (_size); \ + } +#define IRDMA_RING_SIZE(_ring) ((_ring).size) +#define IRDMA_RING_CURRENT_HEAD(_ring) ((_ring).head) +#define IRDMA_RING_CURRENT_TAIL(_ring) ((_ring).tail) + +#define IRDMA_RING_MOVE_HEAD(_ring, _retcode) \ + { \ + register u32 size; \ + size = (_ring).size; \ + if (!IRDMA_RING_FULL_ERR(_ring)) { \ + (_ring).head = ((_ring).head + 1) % size; \ + (_retcode) = 0; \ + } else { \ + (_retcode) = IRDMA_ERR_RING_FULL; \ + } \ + } +#define IRDMA_RING_MOVE_HEAD_BY_COUNT(_ring, _count, _retcode) \ + { \ + register u32 size; \ + size = (_ring).size; \ + if ((IRDMA_RING_USED_QUANTA(_ring) + (_count)) < size) { \ + (_ring).head = ((_ring).head + (_count)) % size; \ + (_retcode) = 0; \ + } else { \ + (_retcode) = IRDMA_ERR_RING_FULL; \ + } \ + } +#define IRDMA_SQ_RING_MOVE_HEAD(_ring, _retcode) \ + { \ + register u32 size; \ + size = (_ring).size; \ + if (!IRDMA_SQ_RING_FULL_ERR(_ring)) { \ + (_ring).head = ((_ring).head + 1) % size; \ + (_retcode) = 0; \ + } else { \ + (_retcode) = IRDMA_ERR_RING_FULL; \ + } \ + } +#define IRDMA_SQ_RING_MOVE_HEAD_BY_COUNT(_ring, _count, _retcode) \ + { \ + register u32 size; \ + size = (_ring).size; \ + if ((IRDMA_RING_USED_QUANTA(_ring) + (_count)) < (size - 256)) { \ + (_ring).head = ((_ring).head + (_count)) % size; \ + (_retcode) = 0; \ + } else { \ + (_retcode) = IRDMA_ERR_RING_FULL; \ + } \ + } +#define IRDMA_RING_MOVE_HEAD_BY_COUNT_NOCHECK(_ring, _count) \ + (_ring).head = ((_ring).head + (_count)) % (_ring).size + +#define IRDMA_RING_MOVE_TAIL(_ring) \ + (_ring).tail = ((_ring).tail + 1) % (_ring).size + +#define IRDMA_RING_MOVE_HEAD_NOCHECK(_ring) \ + (_ring).head = ((_ring).head + 1) % (_ring).size + +#define IRDMA_RING_MOVE_TAIL_BY_COUNT(_ring, _count) \ + (_ring).tail = ((_ring).tail + (_count)) % (_ring).size + +#define IRDMA_RING_SET_TAIL(_ring, _pos) \ + (_ring).tail = (_pos) % (_ring).size + +#define IRDMA_RING_FULL_ERR(_ring) \ + ( \ + (IRDMA_RING_USED_QUANTA(_ring) == ((_ring).size - 1)) \ + ) + +#define IRDMA_ERR_RING_FULL2(_ring) \ + ( \ + (IRDMA_RING_USED_QUANTA(_ring) == ((_ring).size - 2)) \ + ) + +#define IRDMA_ERR_RING_FULL3(_ring) \ + ( \ + (IRDMA_RING_USED_QUANTA(_ring) == ((_ring).size 
- 3)) \ + ) + +#define IRDMA_SQ_RING_FULL_ERR(_ring) \ + ( \ + (IRDMA_RING_USED_QUANTA(_ring) == ((_ring).size - 257)) \ + ) + +#define IRDMA_ERR_SQ_RING_FULL2(_ring) \ + ( \ + (IRDMA_RING_USED_QUANTA(_ring) == ((_ring).size - 258)) \ + ) +#define IRDMA_ERR_SQ_RING_FULL3(_ring) \ + ( \ + (IRDMA_RING_USED_QUANTA(_ring) == ((_ring).size - 259)) \ + ) +#define IRDMA_RING_MORE_WORK(_ring) \ + ( \ + (IRDMA_RING_USED_QUANTA(_ring) != 0) \ + ) + +#define IRDMA_RING_USED_QUANTA(_ring) \ + ( \ + (((_ring).head + (_ring).size - (_ring).tail) % (_ring).size) \ + ) + +#define IRDMA_RING_FREE_QUANTA(_ring) \ + ( \ + ((_ring).size - IRDMA_RING_USED_QUANTA(_ring) - 1) \ + ) + +#define IRDMA_SQ_RING_FREE_QUANTA(_ring) \ + ( \ + ((_ring).size - IRDMA_RING_USED_QUANTA(_ring) - 257) \ + ) + +#define IRDMA_ATOMIC_RING_MOVE_HEAD(_ring, index, _retcode) \ + { \ + index = IRDMA_RING_CURRENT_HEAD(_ring); \ + IRDMA_RING_MOVE_HEAD(_ring, _retcode); \ + } + +enum irdma_qp_wqe_size { + IRDMA_WQE_SIZE_32 = 32, + IRDMA_WQE_SIZE_64 = 64, + IRDMA_WQE_SIZE_96 = 96, + IRDMA_WQE_SIZE_128 = 128, + IRDMA_WQE_SIZE_256 = 256, +}; + +enum irdma_ws_node_op { + IRDMA_ADD_NODE = 0, + IRDMA_MODIFY_NODE, + IRDMA_DEL_NODE, + IRDMA_FAILOVER_START, + IRDMA_FAILOVER_COMPLETE, +}; + +enum { IRDMA_Q_ALIGNMENT_M = (128 - 1), + IRDMA_AEQ_ALIGNMENT_M = (256 - 1), + IRDMA_Q2_ALIGNMENT_M = (256 - 1), + IRDMA_CEQ_ALIGNMENT_M = (256 - 1), + IRDMA_CQ0_ALIGNMENT_M = (256 - 1), + IRDMA_HOST_CTX_ALIGNMENT_M = (4 - 1), + IRDMA_SHADOWAREA_M = (128 - 1), + IRDMA_FPM_QUERY_BUF_ALIGNMENT_M = (4 - 1), + IRDMA_FPM_COMMIT_BUF_ALIGNMENT_M = (4 - 1), +}; + +enum irdma_alignment { + IRDMA_CQP_ALIGNMENT = 0x200, + IRDMA_AEQ_ALIGNMENT = 0x100, + IRDMA_CEQ_ALIGNMENT = 0x100, + IRDMA_CQ0_ALIGNMENT = 0x100, + IRDMA_SD_BUF_ALIGNMENT = 0x80, + IRDMA_FEATURE_BUF_ALIGNMENT = 0x8, +}; + +enum icrdma_protocol_used { + ICRDMA_ANY_PROTOCOL = 0, + ICRDMA_IWARP_PROTOCOL_ONLY = 1, + ICRDMA_ROCE_PROTOCOL_ONLY = 2, +}; + +/** + * set_64bit_val - set 64 bit value to hw wqe + * @wqe_words: wqe addr to write + * @byte_index: index in wqe + * @val: value to write + **/ +static inline void set_64bit_val(__le64 *wqe_words, u32 byte_index, u64 val) +{ + wqe_words[byte_index >> 3] = cpu_to_le64(val); +} + +/** + * set_32bit_val - set 32 bit value to hw wqe + * @wqe_words: wqe addr to write + * @byte_index: index in wqe + * @val: value to write + **/ +static inline void set_32bit_val(u32 *wqe_words, u32 byte_index, u32 val) +{ + wqe_words[byte_index >> 2] = val; +} + +/** + * get_64bit_val - read 64 bit value from wqe + * @wqe_words: wqe addr + * @byte_index: index to read from + * @val: read value + **/ +static inline void get_64bit_val(__le64 *wqe_words, u32 byte_index, u64 *val) +{ + *val = le64_to_cpu(wqe_words[byte_index >> 3]); +} + +/** + * get_32bit_val - read 32 bit value from wqe + * @wqe_words: wqe addr + * @byte_index: index to reaad from + * @val: return 32 bit value + **/ +static inline void get_32bit_val(u32 *wqe_words, u32 byte_index, u32 *val) +{ + *val = wqe_words[byte_index >> 2]; +} +#endif /* IRDMA_DEFS_H */ diff --git a/drivers/infiniband/hw/irdma/irdma.h b/drivers/infiniband/hw/irdma/irdma.h new file mode 100644 index 000000000000..31da13d411b6 --- /dev/null +++ b/drivers/infiniband/hw/irdma/irdma.h @@ -0,0 +1,190 @@ +/* SPDX-License-Identifier: GPL-2.0 or Linux-OpenIB */ +/* Copyright (c) 2019 Intel Corporation */ +#ifndef IRDMA_H +#define IRDMA_H + +#define IRDMA_WQEALLOC_WQE_DESC_INDEX_S 20 +#define IRDMA_WQEALLOC_WQE_DESC_INDEX_M (0xfff << 
IRDMA_WQEALLOC_WQE_DESC_INDEX_S) + +#define IRDMA_CQPTAIL_WQTAIL_S 0 +#define IRDMA_CQPTAIL_WQTAIL_M (0x7ff << IRDMA_CQPTAIL_WQTAIL_S) + +#define IRDMA_CQPTAIL_CQP_OP_ERR_S 31 +#define IRDMA_CQPTAIL_CQP_OP_ERR_M (0x1 << IRDMA_CQPTAIL_CQP_OP_ERR_S) + +#define IRDMA_CQPERRCODES_CQP_MINOR_CODE_S 0 +#define IRDMA_CQPERRCODES_CQP_MINOR_CODE_M (0xffff << IRDMA_CQPERRCODES_CQP_MINOR_CODE_S) +#define IRDMA_CQPERRCODES_CQP_MAJOR_CODE_S 16 +#define IRDMA_CQPERRCODES_CQP_MAJOR_CODE_M (0xffff << IRDMA_CQPERRCODES_CQP_MAJOR_CODE_S) + +#define IRDMA_GLPCI_LBARCTRL_PE_DB_SIZE_S 4 +#define IRDMA_GLPCI_LBARCTRL_PE_DB_SIZE_M (0x3 << IRDMA_GLPCI_LBARCTRL_PE_DB_SIZE_S) + +#define IRDMA_GLINT_DYN_CTL_INTENA_S 0 +#define IRDMA_GLINT_DYN_CTL_INTENA_M (0x1 << IRDMA_GLINT_DYN_CTL_INTENA_S) + +#define IRDMA_GLINT_DYN_CTL_CLEARPBA_S 1 +#define IRDMA_GLINT_DYN_CTL_CLEARPBA_M (0x1 << IRDMA_GLINT_DYN_CTL_CLEARPBA_S) + +#define IRDMA_GLINT_DYN_CTL_ITR_INDX_S 3 +#define IRDMA_GLINT_DYN_CTL_ITR_INDX_M (0x3 << IRDMA_GLINT_DYN_CTL_ITR_INDX_S) + +#define IRDMA_GLINT_CEQCTL_ITR_INDX_S 11 +#define IRDMA_GLINT_CEQCTL_ITR_INDX_M (0x3 << IRDMA_GLINT_CEQCTL_ITR_INDX_S) + +#define IRDMA_GLINT_CEQCTL_CAUSE_ENA_S 30 +#define IRDMA_GLINT_CEQCTL_CAUSE_ENA_M (0x1 << IRDMA_GLINT_CEQCTL_CAUSE_ENA_S) + +#define IRDMA_GLINT_CEQCTL_MSIX_INDX_S 0 +#define IRDMA_GLINT_CEQCTL_MSIX_INDX_M (0x7ff << IRDMA_GLINT_CEQCTL_MSIX_INDX_S) + +#define IRDMA_PFINT_AEQCTL_MSIX_INDX_S 0 +#define IRDMA_PFINT_AEQCTL_MSIX_INDX_M (0x7ff << IRDMA_PFINT_AEQCTL_MSIX_INDX_S) + +#define IRDMA_PFINT_AEQCTL_ITR_INDX_S 11 +#define IRDMA_PFINT_AEQCTL_ITR_INDX_M (0x3 << IRDMA_PFINT_AEQCTL_ITR_INDX_S) + +#define IRDMA_PFINT_AEQCTL_CAUSE_ENA_S 30 +#define IRDMA_PFINT_AEQCTL_CAUSE_ENA_M (0x1 << IRDMA_PFINT_AEQCTL_CAUSE_ENA_S) + +#define IRDMA_PFHMC_PDINV_PMSDIDX_S 0 +#define IRDMA_PFHMC_PDINV_PMSDIDX_M (0xfff << IRDMA_PFHMC_PDINV_PMSDIDX_S) + +#define IRDMA_PFHMC_PDINV_PMSDPARTSEL_S 15 +#define IRDMA_PFHMC_PDINV_PMSDPARTSEL_M (0x1 << IRDMA_PFHMC_PDINV_PMSDPARTSEL_S) + +#define IRDMA_PFHMC_PDINV_PMPDIDX_S 16 +#define IRDMA_PFHMC_PDINV_PMPDIDX_M (0x1ff << IRDMA_PFHMC_PDINV_PMPDIDX_S) + +#define IRDMA_PFHMC_SDDATALOW_PMSDVALID_S 0 +#define IRDMA_PFHMC_SDDATALOW_PMSDVALID_M (0x1 << IRDMA_PFHMC_SDDATALOW_PMSDVALID_S) +#define IRDMA_PFHMC_SDDATALOW_PMSDTYPE_S 1 +#define IRDMA_PFHMC_SDDATALOW_PMSDTYPE_M (0x1 << IRDMA_PFHMC_SDDATALOW_PMSDTYPE_S) +#define IRDMA_PFHMC_SDDATALOW_PMSDBPCOUNT_S 2 +#define IRDMA_PFHMC_SDDATALOW_PMSDBPCOUNT_M (0x3ff << IRDMA_PFHMC_SDDATALOW_PMSDBPCOUNT_S) +#define IRDMA_PFHMC_SDDATALOW_PMSDDATALOW_S 12 +#define IRDMA_PFHMC_SDDATALOW_PMSDDATALOW_M (0xfffff << IRDMA_PFHMC_SDDATALOW_PMSDDATALOW_S) + +#define IRDMA_PFHMC_SDCMD_PMSDWR_S 31 +#define IRDMA_PFHMC_SDCMD_PMSDWR_M (0x1 << IRDMA_PFHMC_SDCMD_PMSDWR_S) + +#define IRDMA_INVALID_CQ_IDX 0xffffffff + +/* I40IW FW VER which supports RTS AE and CQ RESIZE */ +#define IRDMA_FW_VER_0x30010 0x30010 +/* IRDMA FW VER */ +#define IRDMA_FW_VER_0x1000D 0x1000D +enum irdma_registers { + IRDMA_CQPTAIL, + IRDMA_CQPDB, + IRDMA_CCQPSTATUS, + IRDMA_CCQPHIGH, + IRDMA_CCQPLOW, + IRDMA_CQARM, + IRDMA_CQACK, + IRDMA_AEQALLOC, + IRDMA_CQPERRCODES, + IRDMA_WQEALLOC, + IRDMA_GLINT_DYN_CTL, + IRDMA_DB_ADDR_OFFSET, + IRDMA_GLPCI_LBARCTRL, + IRDMA_GLPE_CPUSTATUS0, + IRDMA_GLPE_CPUSTATUS1, + IRDMA_GLPE_CPUSTATUS2, + IRDMA_PFINT_AEQCTL, + IRDMA_GLINT_CEQCTL, + IRDMA_VSIQF_PE_CTL1, + IRDMA_PFHMC_PDINV, + IRDMA_GLHMC_VFPDINV, + IRDMA_MAX_REGS, /* Must be last entry */ +}; + +enum irdma_shifts { + IRDMA_CCQPSTATUS_CCQP_DONE_S, + 
IRDMA_CCQPSTATUS_CCQP_ERR_S, + IRDMA_CQPSQ_STAG_PDID_S, + IRDMA_CQPSQ_CQ_CEQID_S, + IRDMA_CQPSQ_CQ_CQID_S, + IRDMA_MAX_SHIFTS, +}; + +enum irdma_masks { + IRDMA_CCQPSTATUS_CCQP_DONE_M, + IRDMA_CCQPSTATUS_CCQP_ERR_M, + IRDMA_CQPSQ_STAG_PDID_M, + IRDMA_CQPSQ_CQ_CEQID_M, + IRDMA_CQPSQ_CQ_CQID_M, + IRDMA_MAX_MASKS, /* Must be last entry */ +}; + +#define IRDMA_MAX_MGS_PER_CTX 8 + +struct irdma_mcast_grp_ctx_entry_info { + u32 qp_id; + bool valid_entry; + u16 dest_port; + u32 use_cnt; +}; + +struct irdma_mcast_grp_info { + u8 dest_mac_addr[ETH_ALEN]; + u16 vlan_id; + u8 hmc_fcn_id; + bool ipv4_valid:1; + bool vlan_valid:1; + u16 mg_id; + u32 no_of_mgs; + u32 dest_ip_addr[4]; + u16 qs_handle; + struct irdma_dma_mem dma_mem_mc; + struct irdma_mcast_grp_ctx_entry_info mg_ctx_info[IRDMA_MAX_MGS_PER_CTX]; +}; + +enum irdma_vers { + IRDMA_GEN_RSVD, + IRDMA_GEN_1, + IRDMA_GEN_2, + IRDMA_GEN_3, +}; + +struct irdma_uk_attrs { + u64 feature_flags; + u32 max_hw_wq_frags; + u32 max_hw_read_sges; + u32 max_hw_inline; + u32 max_hw_rq_quanta; + u32 max_hw_wq_quanta; + u32 min_hw_cq_size; + u32 max_hw_cq_size; + u16 max_hw_sq_chunk; + u8 hw_rev; +}; + +struct irdma_hw_attrs { + struct irdma_uk_attrs uk_attrs; + u64 max_hw_outbound_msg_size; + u64 max_hw_inbound_msg_size; + u64 max_mr_size; + u32 min_hw_qp_id; + u32 min_hw_aeq_size; + u32 max_hw_aeq_size; + u32 min_hw_ceq_size; + u32 max_hw_ceq_size; + u32 max_hw_device_pages; + u32 max_hw_vf_fpm_id; + u32 first_hw_vf_fpm_id; + u32 max_hw_ird; + u32 max_hw_ord; + u32 max_hw_wqes; + u32 max_hw_pds; + u32 max_hw_ena_vf_count; + u32 max_qp_wr; + u32 max_pe_ready_count; + u32 max_done_count; + u32 max_sleep_count; + u32 max_cqp_compl_wait_time_ms; + u16 max_stat_inst; +}; + +void icrdma_init_hw(struct irdma_sc_dev *dev); +#endif /* IRDMA_H*/ diff --git a/drivers/infiniband/hw/irdma/type.h b/drivers/infiniband/hw/irdma/type.h new file mode 100644 index 000000000000..ec5daf16ffb4 --- /dev/null +++ b/drivers/infiniband/hw/irdma/type.h @@ -0,0 +1,1714 @@ +/* SPDX-License-Identifier: GPL-2.0 or Linux-OpenIB */ +/* Copyright (c) 2015 - 2019 Intel Corporation */ +#ifndef IRDMA_TYPE_H +#define IRDMA_TYPE_H +#include "osdep.h" +#include "irdma.h" +#include "user.h" +#include "hmc.h" +#include "uda.h" +#include "ws.h" + +#define IRDMA_DEBUG_ERR "ERR" +#define IRDMA_DEBUG_INIT "INIT" +#define IRDMA_DEBUG_DEV "DEV" +#define IRDMA_DEBUG_CM "CM" +#define IRDMA_DEBUG_VERBS "VERBS" +#define IRDMA_DEBUG_PUDA "PUDA" +#define IRDMA_DEBUG_ILQ "ILQ" +#define IRDMA_DEBUG_IEQ "IEQ" +#define IRDMA_DEBUG_QP "QP" +#define IRDMA_DEBUG_CQ "CQ" +#define IRDMA_DEBUG_MR "MR" +#define IRDMA_DEBUG_PBLE "PBLE" +#define IRDMA_DEBUG_WQE "WQE" +#define IRDMA_DEBUG_AEQ "AEQ" +#define IRDMA_DEBUG_CQP "CQP" +#define IRDMA_DEBUG_HMC "HMC" +#define IRDMA_DEBUG_USER "USER" +#define IRDMA_DEBUG_VIRT "VIRT" +#define IRDMA_DEBUG_DCB "DCB" +#define IRDMA_DEBUG_CQE "CQE" +#define IRDMA_DEBUG_CLNT "CLNT" +#define IRDMA_DEBUG_WS "WS" +#define IRDMA_DEBUG_STATS "STATS" + +enum irdma_page_size { + IRDMA_PAGE_SIZE_4K = 0, + IRDMA_PAGE_SIZE_2M, + IRDMA_PAGE_SIZE_1G, +}; + +enum irdma_hdrct_flags { + DDP_LEN_FLAG = 0x80, + DDP_HDR_FLAG = 0x40, + RDMA_HDR_FLAG = 0x20, +}; + +enum irdma_term_layers { + LAYER_RDMA = 0, + LAYER_DDP = 1, + LAYER_MPA = 2, +}; + +enum irdma_term_error_types { + RDMAP_REMOTE_PROT = 1, + RDMAP_REMOTE_OP = 2, + DDP_CATASTROPHIC = 0, + DDP_TAGGED_BUF = 1, + DDP_UNTAGGED_BUF = 2, + DDP_LLP = 3, +}; + +enum irdma_term_rdma_errors { + RDMAP_INV_STAG = 0x00, + RDMAP_INV_BOUNDS = 0x01, + RDMAP_ACCESS 
= 0x02, + RDMAP_UNASSOC_STAG = 0x03, + RDMAP_TO_WRAP = 0x04, + RDMAP_INV_RDMAP_VER = 0x05, + RDMAP_UNEXPECTED_OP = 0x06, + RDMAP_CATASTROPHIC_LOCAL = 0x07, + RDMAP_CATASTROPHIC_GLOBAL = 0x08, + RDMAP_CANT_INV_STAG = 0x09, + RDMAP_UNSPECIFIED = 0xff, +}; + +enum irdma_term_ddp_errors { + DDP_CATASTROPHIC_LOCAL = 0x00, + DDP_TAGGED_INV_STAG = 0x00, + DDP_TAGGED_BOUNDS = 0x01, + DDP_TAGGED_UNASSOC_STAG = 0x02, + DDP_TAGGED_TO_WRAP = 0x03, + DDP_TAGGED_INV_DDP_VER = 0x04, + DDP_UNTAGGED_INV_QN = 0x01, + DDP_UNTAGGED_INV_MSN_NO_BUF = 0x02, + DDP_UNTAGGED_INV_MSN_RANGE = 0x03, + DDP_UNTAGGED_INV_MO = 0x04, + DDP_UNTAGGED_INV_TOO_LONG = 0x05, + DDP_UNTAGGED_INV_DDP_VER = 0x06, +}; + +enum irdma_term_mpa_errors { + MPA_CLOSED = 0x01, + MPA_CRC = 0x02, + MPA_MARKER = 0x03, + MPA_REQ_RSP = 0x04, +}; + +enum irdma_flush_opcode { + FLUSH_INVALID = 0, + FLUSH_PROT_ERR, + FLUSH_REM_ACCESS_ERR, + FLUSH_LOC_QP_OP_ERR, + FLUSH_REM_OP_ERR, + FLUSH_LOC_LEN_ERR, + FLUSH_GENERAL_ERR, + FLUSH_FATAL_ERR, +}; + +enum irdma_term_eventtypes { + TERM_EVENT_QP_FATAL, + TERM_EVENT_QP_ACCESS_ERR, +}; + +enum irdma_hw_stats_index_32b { + IRDMA_HW_STAT_INDEX_IP4RXDISCARD = 0, + IRDMA_HW_STAT_INDEX_IP4RXTRUNC = 1, + IRDMA_HW_STAT_INDEX_IP4TXNOROUTE = 2, + IRDMA_HW_STAT_INDEX_IP6RXDISCARD = 3, + IRDMA_HW_STAT_INDEX_IP6RXTRUNC = 4, + IRDMA_HW_STAT_INDEX_IP6TXNOROUTE = 5, + IRDMA_HW_STAT_INDEX_TCPRTXSEG = 6, + IRDMA_HW_STAT_INDEX_TCPRXOPTERR = 7, + IRDMA_HW_STAT_INDEX_TCPRXPROTOERR = 8, + IRDMA_HW_STAT_INDEX_MAX_32_GEN_1 = 9, /* Must be same value as next entry */ + IRDMA_HW_STAT_INDEX_RXVLANERR = 9, + IRDMA_HW_STAT_INDEX_RXRPCNPHANDLED = 10, + IRDMA_HW_STAT_INDEX_RXRPCNPIGNORED = 11, + IRDMA_HW_STAT_INDEX_TXNPCNPSENT = 12, + IRDMA_HW_STAT_INDEX_MAX_32, /* Must be last entry */ +}; + +enum irdma_hw_stats_index_64b { + IRDMA_HW_STAT_INDEX_IP4RXOCTS = 0, + IRDMA_HW_STAT_INDEX_IP4RXPKTS = 1, + IRDMA_HW_STAT_INDEX_IP4RXFRAGS = 2, + IRDMA_HW_STAT_INDEX_IP4RXMCPKTS = 3, + IRDMA_HW_STAT_INDEX_IP4TXOCTS = 4, + IRDMA_HW_STAT_INDEX_IP4TXPKTS = 5, + IRDMA_HW_STAT_INDEX_IP4TXFRAGS = 6, + IRDMA_HW_STAT_INDEX_IP4TXMCPKTS = 7, + IRDMA_HW_STAT_INDEX_IP6RXOCTS = 8, + IRDMA_HW_STAT_INDEX_IP6RXPKTS = 9, + IRDMA_HW_STAT_INDEX_IP6RXFRAGS = 10, + IRDMA_HW_STAT_INDEX_IP6RXMCPKTS = 11, + IRDMA_HW_STAT_INDEX_IP6TXOCTS = 12, + IRDMA_HW_STAT_INDEX_IP6TXPKTS = 13, + IRDMA_HW_STAT_INDEX_IP6TXFRAGS = 14, + IRDMA_HW_STAT_INDEX_IP6TXMCPKTS = 15, + IRDMA_HW_STAT_INDEX_TCPRXSEGS = 16, + IRDMA_HW_STAT_INDEX_TCPTXSEG = 17, + IRDMA_HW_STAT_INDEX_RDMARXRDS = 18, + IRDMA_HW_STAT_INDEX_RDMARXSNDS = 19, + IRDMA_HW_STAT_INDEX_RDMARXWRS = 20, + IRDMA_HW_STAT_INDEX_RDMATXRDS = 21, + IRDMA_HW_STAT_INDEX_RDMATXSNDS = 22, + IRDMA_HW_STAT_INDEX_RDMATXWRS = 23, + IRDMA_HW_STAT_INDEX_RDMAVBND = 24, + IRDMA_HW_STAT_INDEX_RDMAVINV = 25, + IRDMA_HW_STAT_INDEX_MAX_64_GEN_1 = 26, /* Must be same value as next entry */ + IRDMA_HW_STAT_INDEX_IP4RXMCOCTS = 26, + IRDMA_HW_STAT_INDEX_IP4TXMCOCTS = 27, + IRDMA_HW_STAT_INDEX_IP6RXMCOCTS = 28, + IRDMA_HW_STAT_INDEX_IP6TXMCOCTS = 29, + IRDMA_HW_STAT_INDEX_UDPRXPKTS = 30, + IRDMA_HW_STAT_INDEX_UDPTXPKTS = 31, + IRDMA_HW_STAT_INDEX_RXNPECNMARKEDPKTS = 32, + IRDMA_HW_STAT_INDEX_MAX_64, /* Must be last entry */ +}; + +enum irdma_feature_type { + IRDMA_FEATURE_FW_INFO = 0, + IRDMA_HW_VERSION_INFO = 1, + IRDMA_QSETS_MAX = 26, + IRDMA_MAX_FEATURES, /* Must be last entry */ +}; + +enum irdma_sched_prio_type { + IRDMA_PRIO_WEIGHTED_RR = 1, + IRDMA_PRIO_STRICT = 2, + IRDMA_PRIO_WEIGHTED_STRICT = 3, +}; + +enum irdma_vm_vf_type { + IRDMA_VF_TYPE = 
0, + IRDMA_VM_TYPE, + IRDMA_PF_TYPE, +}; + +enum irdma_cqp_hmc_profile { + IRDMA_HMC_PROFILE_DEFAULT = 1, + IRDMA_HMC_PROFILE_FAVOR_VF = 2, + IRDMA_HMC_PROFILE_EQUAL = 3, +}; + +enum irdma_quad_entry_type { + IRDMA_QHASH_TYPE_TCP_ESTABLISHED = 1, + IRDMA_QHASH_TYPE_TCP_SYN, + IRDMA_QHASH_TYPE_UDP_UNICAST, + IRDMA_QHASH_TYPE_UDP_MCAST, + IRDMA_QHASH_TYPE_ROCE_MCAST, + IRDMA_QHASH_TYPE_ROCEV2_HW, +}; + +enum irdma_quad_hash_manage_type { + IRDMA_QHASH_MANAGE_TYPE_DELETE = 0, + IRDMA_QHASH_MANAGE_TYPE_ADD, + IRDMA_QHASH_MANAGE_TYPE_MODIFY, +}; + +enum irdma_syn_rst_handling { + IRDMA_SYN_RST_HANDLING_HW_TCP_SECURE = 0, + IRDMA_SYN_RST_HANDLING_HW_TCP, + IRDMA_SYN_RST_HANDLING_FW_TCP_SECURE, + IRDMA_SYN_RST_HANDLING_FW_TCP, +}; + +struct irdma_sc_dev; +struct irdma_vsi_pestat; +struct irdma_irq_ops; +struct irdma_cqp_ops; +struct irdma_ccq_ops; +struct irdma_ceq_ops; +struct irdma_aeq_ops; +struct irdma_mr_ops; +struct irdma_cqp_misc_ops; +struct irdma_pd_ops; +struct irdma_ah_ops; +struct irdma_priv_qp_ops; +struct irdma_priv_cq_ops; +struct irdma_hmc_ops; + +struct irdma_cqp_init_info { + u64 cqp_compl_ctx; + u64 host_ctx_pa; + u64 sq_pa; + struct irdma_sc_dev *dev; + struct irdma_cqp_quanta *sq; + __le64 *host_ctx; + u64 *scratch_array; + u32 sq_size; + u16 hw_maj_ver; + u16 hw_min_ver; + u8 struct_ver; + u8 hmc_profile; + u8 ena_vf_count; + u8 ceqs_per_vf; + bool en_datacenter_tcp:1; + bool disable_packed:1; + bool rocev2_rto_policy:1; + enum irdma_protocol_used protocol_used; +}; + +struct irdma_terminate_hdr { + u8 layer_etype; + u8 error_code; + u8 hdrct; + u8 rsvd; +}; + +struct irdma_cqp_sq_wqe { + __le64 buf[IRDMA_CQP_WQE_SIZE]; +}; + +struct irdma_sc_aeqe { + __le64 buf[IRDMA_AEQE_SIZE]; +}; + +struct irdma_ceqe { + __le64 buf[IRDMA_CEQE_SIZE]; +}; + +struct irdma_cqp_ctx { + __le64 buf[IRDMA_CQP_CTX_SIZE]; +}; + +struct irdma_cq_shadow_area { + __le64 buf[IRDMA_SHADOW_AREA_SIZE]; +}; + +struct irdma_dev_hw_stats_offsets { + u32 stats_offset_32[IRDMA_HW_STAT_INDEX_MAX_32]; + u32 stats_offset_64[IRDMA_HW_STAT_INDEX_MAX_64]; +}; + +struct irdma_dev_hw_stats { + u64 stats_val_32[IRDMA_HW_STAT_INDEX_MAX_32]; + u64 stats_val_64[IRDMA_HW_STAT_INDEX_MAX_64]; +}; + +struct irdma_gather_stats { + u32 rsvd1; + u32 rxvlanerr; + u64 ip4rxocts; + u64 ip4rxpkts; + u32 ip4rxtrunc; + u32 ip4rxdiscard; + u64 ip4rxfrags; + u64 ip4rxmcocts; + u64 ip4rxmcpkts; + u64 ip6rxocts; + u64 ip6rxpkts; + u32 ip6rxtrunc; + u32 ip6rxdiscard; + u64 ip6rxfrags; + u64 ip6rxmcocts; + u64 ip6rxmcpkts; + u64 ip4txocts; + u64 ip4txpkts; + u64 ip4txfrag; + u64 ip4txmcocts; + u64 ip4txmcpkts; + u64 ip6txocts; + u64 ip6txpkts; + u64 ip6txfrags; + u64 ip6txmcocts; + u64 ip6txmcpkts; + u32 ip6txnoroute; + u32 ip4txnoroute; + u64 tcprxsegs; + u32 tcprxprotoerr; + u32 tcprxopterr; + u64 tcptxsegs; + u32 rsvd2; + u32 tcprtxseg; + u64 udprxpkts; + u64 udptxpkts; + u64 rdmarxwrs; + u64 rdmarxrds; + u64 rdmarxsnds; + u64 rdmatxwrs; + u64 rdmatxrds; + u64 rdmatxsnds; + u64 rdmavbn; + u64 rdmavinv; + u64 rxnpecnmrkpkts; + u32 rxrpcnphandled; + u32 rxrpcnpignored; + u32 txnpcnpsent; + u32 rsvd3[88]; +}; + +struct irdma_stats_gather_info { + bool use_hmc_fcn_index:1; + bool use_stats_inst:1; + u8 hmc_fcn_index; + u8 stats_inst_index; + struct irdma_dma_mem stats_buff_mem; + struct irdma_gather_stats *gather_stats; + struct irdma_gather_stats *last_gather_stats; +}; + +struct irdma_vsi_pestat { + struct irdma_hw *hw; + struct irdma_dev_hw_stats hw_stats; + struct irdma_stats_gather_info gather_info; + struct timer_list stats_timer; + 
struct irdma_sc_vsi *vsi; + struct irdma_dev_hw_stats last_hw_stats; + spinlock_t lock; /* rdma stats lock */ +}; + +struct irdma_hw { + u8 __iomem *hw_addr; + u8 __iomem *priv_hw_addr; + struct pci_dev *pdev; + struct irdma_hmc_info hmc; +}; + +struct irdma_pfpdu { + struct list_head rxlist; + u32 rcv_nxt; + u32 fps; + u32 max_fpdu_data; + u32 nextseqnum; + bool mode:1; + bool mpa_crc_err:1; + u64 total_ieq_bufs; + u64 fpdu_processed; + u64 bad_seq_num; + u64 crc_err; + u64 no_tx_bufs; + u64 tx_err; + u64 out_of_order; + u64 pmode_count; + struct irdma_sc_ah *ah; + struct irdma_puda_buf *ah_buf; + spinlock_t lock; /* fpdu processing lock */ + struct irdma_puda_buf *lastrcv_buf; +}; + +struct irdma_sc_pd { + struct irdma_sc_dev *dev; + u32 pd_id; + int abi_ver; +}; + +struct irdma_cqp_quanta { + __le64 elem[IRDMA_CQP_WQE_SIZE]; +}; + +struct irdma_sc_cqp { + u32 size; + u64 sq_pa; + u64 host_ctx_pa; + void *back_cqp; + struct irdma_sc_dev *dev; + enum irdma_status_code (*process_cqp_sds)(struct irdma_sc_dev *dev, + struct irdma_update_sds_info *info); + struct irdma_dma_mem sdbuf; + struct irdma_ring sq_ring; + struct irdma_cqp_quanta *sq_base; + __le64 *host_ctx; + u64 *scratch_array; + u32 cqp_id; + u32 sq_size; + u32 hw_sq_size; + u16 hw_maj_ver; + u16 hw_min_ver; + u8 struct_ver; + u8 polarity; + u8 hmc_profile; + u8 ena_vf_count; + u8 timeout_count; + u8 ceqs_per_vf; + bool en_datacenter_tcp:1; + bool disable_packed:1; + bool rocev2_rto_policy:1; + enum irdma_protocol_used protocol_used; +}; + +struct irdma_sc_aeq { + u32 size; + u64 aeq_elem_pa; + struct irdma_sc_dev *dev; + struct irdma_sc_aeqe *aeqe_base; + void *pbl_list; + u32 elem_cnt; + struct irdma_ring aeq_ring; + u8 pbl_chunk_size; + u32 first_pm_pbl_idx; + u8 polarity; + bool virtual_map:1; +}; + +struct irdma_sc_ceq { + u32 size; + u64 ceq_elem_pa; + struct irdma_sc_dev *dev; + struct irdma_ceqe *ceqe_base; + void *pbl_list; + u32 ceq_id; + u32 elem_cnt; + struct irdma_ring ceq_ring; + u8 pbl_chunk_size; + u8 tph_val; + u32 first_pm_pbl_idx; + u8 polarity; + struct irdma_sc_vsi *vsi; + struct irdma_sc_cq **reg_cq; + u32 reg_cq_size; + spinlock_t req_cq_lock; /* protect access to reg_cq array */ + bool virtual_map:1; + bool tph_en:1; + bool itr_no_expire:1; +}; + +struct irdma_sc_cq { + struct irdma_cq_uk cq_uk; + u64 cq_pa; + u64 shadow_area_pa; + struct irdma_sc_dev *dev; + struct irdma_sc_vsi *vsi; + void *pbl_list; + void *back_cq; + u32 ceq_id; + u32 shadow_read_threshold; + u8 pbl_chunk_size; + u8 cq_type; + u8 tph_val; + u32 first_pm_pbl_idx; + bool ceqe_mask:1; + bool virtual_map:1; + bool check_overflow:1; + bool ceq_id_valid:1; + bool tph_en; +}; + +struct irdma_sc_qp { + struct irdma_qp_uk qp_uk; + u64 sq_pa; + u64 rq_pa; + u64 hw_host_ctx_pa; + u64 shadow_area_pa; + u64 q2_pa; + struct irdma_sc_dev *dev; + struct irdma_sc_vsi *vsi; + struct irdma_sc_pd *pd; + __le64 *hw_host_ctx; + void *llp_stream_handle; + struct irdma_pfpdu pfpdu; + u32 ieq_qp; + u8 *q2_buf; + u64 qp_compl_ctx; + u16 qs_handle; + u16 push_idx; + u16 push_offset; + u8 flush_wqes_count; + u8 sq_tph_val; + u8 rq_tph_val; + u8 qp_state; + u8 qp_type; + u8 hw_sq_size; + u8 hw_rq_size; + u8 src_mac_addr_idx; + bool drop_ieq:1; + bool on_qoslist:1; + bool ieq_pass_thru:1; + bool sq_tph_en:1; + bool rq_tph_en:1; + bool rcv_tph_en:1; + bool xmit_tph_en:1; + bool virtual_map:1; + bool flush_sq:1; + bool flush_rq:1; + bool sq_flush:1; + enum irdma_flush_opcode flush_code; + enum irdma_term_eventtypes eventtype; + u8 term_flags; + u8 user_pri; + struct 
list_head list; +}; + +struct irdma_stats_inst_info { + bool use_hmc_fcn_index; + u8 hmc_fn_id; + u8 stats_idx; +}; + +struct irdma_up_info { + u8 map[8]; + u8 cnp_up_override; + u8 hmc_fcn_idx; + bool use_vlan:1; + bool use_cnp_up_override:1; +}; + +#define IRDMA_MAX_WS_NODES 0x3FF +#define IRDMA_WS_NODE_INVALID 0xFFFF + +struct irdma_ws_node_info { + u16 id; + u16 vsi; + u16 parent_id; + u16 qs_handle; + bool type_leaf:1; + bool enable:1; + u8 prio_type; + u8 tc; + u8 weight; +}; + +struct irdma_hmc_fpm_misc { + u32 max_ceqs; + u32 max_sds; + u32 xf_block_size; + u32 q1_block_size; + u32 ht_multiplier; + u32 timer_bucket; + u32 rrf_block_size; + u32 ooiscf_block_size; +}; + +#define IRDMA_LEAF_DEFAULT_REL_BW 64 +#define IRDMA_PARENT_DEFAULT_REL_BW 1 + +struct irdma_qos { + struct list_head qplist; + spinlock_t lock; /* protect qos list */ + u64 lan_qos_handle; + u32 l2_sched_node_id; + u16 qs_handle; + u8 traffic_class; + u8 rel_bw; + u8 prio_type; +}; + +#define IRDMA_INVALID_FCN_ID 0xff +struct irdma_sc_vsi { + u16 vsi_idx; + struct irdma_sc_dev *dev; + void *back_vsi; + u32 ilq_count; + struct irdma_virt_mem ilq_mem; + struct irdma_puda_rsrc *ilq; + u32 ieq_count; + struct irdma_virt_mem ieq_mem; + struct irdma_puda_rsrc *ieq; + u32 exception_lan_q; + u16 mtu; + u16 vm_id; + u8 fcn_id; + enum irdma_vm_vf_type vm_vf_type; + bool stats_fcn_id_alloc:1; + bool tc_change_pending:1; + struct irdma_qos qos[IRDMA_MAX_USER_PRIORITY]; + struct irdma_vsi_pestat *pestat; + atomic_t qp_suspend_reqs; + enum irdma_status_code (*register_qset)(struct irdma_sc_vsi *vsi, + struct irdma_ws_node *tc_node); + void (*unregister_qset)(struct irdma_sc_vsi *vsi, + struct irdma_ws_node *tc_node); + u8 qos_rel_bw; + u8 qos_prio_type; +}; + +struct irdma_sc_dev { + struct list_head cqp_cmd_head; /* head of the CQP command list */ + spinlock_t cqp_lock; /* protect CQP list access */ + struct irdma_dev_uk dev_uk; + bool fcn_id_array[IRDMA_MAX_STATS_COUNT]; + struct irdma_dma_mem vf_fpm_query_buf[IRDMA_MAX_PE_ENA_VF_COUNT]; + u64 fpm_query_buf_pa; + u64 fpm_commit_buf_pa; + __le64 *fpm_query_buf; + __le64 *fpm_commit_buf; + void *back_dev; + struct irdma_hw *hw; + u8 __iomem *db_addr; + u32 __iomem *wqe_alloc_db; + u32 __iomem *cq_arm_db; + u32 __iomem *aeq_alloc_db; + u32 __iomem *cqp_db; + u32 __iomem *cq_ack_db; + u32 __iomem *ceq_itr_mask_db; + u32 __iomem *aeq_itr_mask_db; + u32 __iomem *hw_regs[IRDMA_MAX_REGS]; + u64 hw_masks[IRDMA_MAX_MASKS]; + u64 hw_shifts[IRDMA_MAX_SHIFTS]; + u64 hw_stats_regs_32[IRDMA_HW_STAT_INDEX_MAX_32]; + u64 hw_stats_regs_64[IRDMA_HW_STAT_INDEX_MAX_64]; + u64 feature_info[IRDMA_MAX_FEATURES]; + u64 cqp_cmd_stats[IRDMA_OP_SIZE_CQP_STAT_ARRAY]; + struct irdma_hw_attrs hw_attrs; + struct irdma_hmc_info *hmc_info; + struct irdma_vfdev *vf_dev[IRDMA_MAX_PE_ENA_VF_COUNT]; + struct irdma_sc_cqp *cqp; + struct irdma_sc_aeq *aeq; + struct irdma_sc_ceq *ceq[IRDMA_CEQ_MAX_COUNT]; + struct irdma_sc_cq *ccq; + struct irdma_irq_ops *irq_ops; + struct irdma_cqp_ops *cqp_ops; + struct irdma_ccq_ops *ccq_ops; + struct irdma_ceq_ops *ceq_ops; + struct irdma_aeq_ops *aeq_ops; + struct irdma_pd_ops *iw_pd_ops; + struct irdma_ah_ops *iw_ah_ops; + struct irdma_priv_qp_ops *iw_priv_qp_ops; + struct irdma_priv_cq_ops *iw_priv_cq_ops; + struct irdma_mr_ops *mr_ops; + struct irdma_cqp_misc_ops *cqp_misc_ops; + struct irdma_hmc_ops *hmc_ops; + struct irdma_uda_ops *iw_uda_ops; + struct irdma_hmc_fpm_misc hmc_fpm_misc; + struct irdma_ws_node *ws_tree_root; + struct mutex ws_mutex; /* ws tree mutex */ + u16 
num_vfs; + u8 hmc_fn_id; + u8 vf_id; + bool privileged:1; + bool vchnl_up:1; + bool ceq_valid:1; + u8 pci_rev; + enum irdma_status_code (*ws_add)(struct irdma_sc_vsi *vsi, u8 user_pri); + void (*ws_remove)(struct irdma_sc_vsi *vsi, u8 user_pri); + void (*ws_reset)(struct irdma_sc_vsi *vsi); +}; + +struct irdma_modify_cq_info { + u64 cq_pa; + struct irdma_cqe *cq_base; + u32 ceq_id; + u32 cq_size; + u32 shadow_read_threshold; + u8 pbl_chunk_size; + u32 first_pm_pbl_idx; + bool virtual_map:1; + bool check_overflow; + bool cq_resize:1; + bool ceq_valid:1; +}; + +struct irdma_create_qp_info { + bool ord_valid:1; + bool tcp_ctx_valid:1; + bool cq_num_valid:1; + bool arp_cache_idx_valid:1; + bool mac_valid:1; + bool force_lpb; + u8 next_iwarp_state; +}; + +struct irdma_modify_qp_info { + u64 rx_win0; + u64 rx_win1; + u16 new_mss; + u8 next_iwarp_state; + u8 curr_iwarp_state; + u8 termlen; + bool ord_valid:1; + bool tcp_ctx_valid:1; + bool udp_ctx_valid:1; + bool cq_num_valid:1; + bool arp_cache_idx_valid:1; + bool reset_tcp_conn:1; + bool remove_hash_idx:1; + bool dont_send_term:1; + bool dont_send_fin:1; + bool cached_var_valid:1; + bool mss_change:1; + bool force_lpb:1; + bool mac_valid:1; +}; + +struct irdma_ccq_cqe_info { + struct irdma_sc_cqp *cqp; + u64 scratch; + u32 op_ret_val; + u16 maj_err_code; + u16 min_err_code; + u8 op_code; + bool error; +}; + +struct irdma_dcb_app_info { + u8 priority; + u8 selector; + u16 prot_id; +}; + +struct irdma_qos_tc_info { + u64 tc_ctx; + u8 rel_bw; + u8 prio_type; + u8 egress_virt_up; + u8 ingress_virt_up; +}; + +struct irdma_l2params { + struct irdma_qos_tc_info tc_info[IRDMA_MAX_USER_PRIORITY]; + struct irdma_dcb_app_info apps[IRDMA_MAX_APPS]; + u32 num_apps; + u16 qs_handle_list[IRDMA_MAX_USER_PRIORITY]; + u16 mtu; + u8 up2tc[IRDMA_MAX_USER_PRIORITY]; + u8 num_tc; + u8 vsi_rel_bw; + u8 vsi_prio_type; + bool mtu_changed:1; + bool tc_changed:1; +}; + +struct irdma_vsi_init_info { + struct irdma_sc_dev *dev; + void *back_vsi; + struct irdma_l2params *params; + u16 exception_lan_q; + u16 pf_data_vsi_num; + enum irdma_vm_vf_type vm_vf_type; + u16 vm_id; + enum irdma_status_code (*register_qset)(struct irdma_sc_vsi *vsi, + struct irdma_ws_node *tc_node); + void (*unregister_qset)(struct irdma_sc_vsi *vsi, + struct irdma_ws_node *tc_node); +}; + +struct irdma_vsi_stats_info { + struct irdma_vsi_pestat *pestat; + u8 fcn_id; + bool alloc_fcn_id; +}; + +struct irdma_device_init_info { + u64 fpm_query_buf_pa; + u64 fpm_commit_buf_pa; + __le64 *fpm_query_buf; + __le64 *fpm_commit_buf; + struct irdma_hw *hw; + void __iomem *bar0; + enum irdma_status_code (*vchnl_send)(struct irdma_sc_dev *dev, + u32 vf_id, u8 *msg, u16 len); + void (*init_hw)(struct irdma_sc_dev *dev); + u8 hmc_fn_id; + bool privileged; +}; + +struct irdma_ceq_init_info { + u64 ceqe_pa; + struct irdma_sc_dev *dev; + u64 *ceqe_base; + void *pbl_list; + u32 elem_cnt; + u32 ceq_id; + bool virtual_map:1; + bool tph_en:1; + bool itr_no_expire:1; + u8 pbl_chunk_size; + u8 tph_val; + u32 first_pm_pbl_idx; + struct irdma_sc_vsi *vsi; + struct irdma_sc_cq **reg_cq; + u32 reg_cq_idx; +}; + +struct irdma_aeq_init_info { + u64 aeq_elem_pa; + struct irdma_sc_dev *dev; + u32 *aeqe_base; + void *pbl_list; + u32 elem_cnt; + bool virtual_map; + u8 pbl_chunk_size; + u32 first_pm_pbl_idx; +}; + +struct irdma_ccq_init_info { + u64 cq_pa; + u64 shadow_area_pa; + struct irdma_sc_dev *dev; + struct irdma_cqe *cq_base; + __le64 *shadow_area; + void *pbl_list; + u32 num_elem; + u32 ceq_id; + u32 shadow_read_threshold; + 
bool ceqe_mask:1; + bool ceq_id_valid:1; + bool avoid_mem_cflct:1; + bool virtual_map:1; + bool tph_en:1; + u8 tph_val; + u8 pbl_chunk_size; + u32 first_pm_pbl_idx; + struct irdma_sc_vsi *vsi; +}; + +struct irdma_udp_offload_info { + bool ipv4:1; + bool insert_vlan_tag:1; + u8 ttl; + u8 tos; + u16 src_port; + u16 dst_port; + u32 dest_ip_addr0; + u32 dest_ip_addr1; + u32 dest_ip_addr2; + u32 dest_ip_addr3; + u32 snd_mss; + u16 vlan_tag; + u16 arp_idx; + u32 flow_label; + u8 udp_state; + u32 psn_nxt; + u32 lsn; + u32 epsn; + u32 psn_max; + u32 psn_una; + u32 local_ipaddr0; + u32 local_ipaddr1; + u32 local_ipaddr2; + u32 local_ipaddr3; + u32 cwnd; + u8 rexmit_thresh; + u8 rnr_nak_thresh; +}; + +struct irdma_roce_offload_info { + u16 p_key; + u16 err_rq_idx; + u32 qkey; + u32 dest_qp; + u32 local_qp; + u8 roce_tver; + u8 ack_credits; + u8 err_rq_idx_valid; + u32 pd_id; + u16 ord_size; + u8 ird_size; + bool is_qp1:1; + bool udprivcq_en:1; + bool dcqcn_en:1; + bool rcv_no_icrc:1; + bool wr_rdresp_en:1; + bool bind_en:1; + bool fast_reg_en:1; + bool priv_mode_en:1; + bool rd_en:1; + bool timely_en:1; + bool dctcp_en:1; + bool fw_cc_enable:1; + bool use_stats_inst:1; + u16 t_high; + u16 t_low; + u8 last_byte_sent; + u8 mac_addr[ETH_ALEN]; + +}; + +struct irdma_iwarp_offload_info { + u16 rcv_mark_offset; + u16 snd_mark_offset; + u8 ddp_ver; + u8 rdmap_ver; + u8 iwarp_mode; + + u16 err_rq_idx; + u32 pd_id; + u16 ord_size; + u8 ird_size; + bool ib_rd_en:1; + bool align_hdrs:1; + bool rcv_no_mpa_crc:1; + bool err_rq_idx_valid:1; + bool snd_mark_en:1; + bool rcv_mark_en:1; + bool wr_rdresp_en:1; + bool bind_en:1; + bool fast_reg_en:1; + bool priv_mode_en:1; + bool rd_en:1; + bool timely_en:1; + bool use_stats_inst:1; + bool ecn_en:1; + bool dctcp_en:1; + u16 t_high; + u16 t_low; + u8 last_byte_sent; + u8 mac_addr[ETH_ALEN]; + u8 rtomin; +}; + +struct irdma_tcp_offload_info { + bool ipv4:1; + bool no_nagle:1; + bool insert_vlan_tag:1; + bool time_stamp:1; + bool drop_ooo_seg:1; + bool avoid_stretch_ack:1; + bool wscale:1; + bool ignore_tcp_opt:1; + bool ignore_tcp_uns_opt:1; + u8 cwnd_inc_limit; + u8 dup_ack_thresh; + u8 ttl; + u8 src_mac_addr_idx; + u8 tos; + u16 src_port; + u16 dst_port; + u32 dest_ip_addr0; + u32 dest_ip_addr1; + u32 dest_ip_addr2; + u32 dest_ip_addr3; + u32 snd_mss; + u16 syn_rst_handling; + u16 vlan_tag; + u16 arp_idx; + u32 flow_label; + u8 tcp_state; + u8 snd_wscale; + u8 rcv_wscale; + u32 time_stamp_recent; + u32 time_stamp_age; + u32 snd_nxt; + u32 snd_wnd; + u32 rcv_nxt; + u32 rcv_wnd; + u32 snd_max; + u32 snd_una; + u32 srtt; + u32 rtt_var; + u32 ss_thresh; + u32 cwnd; + u32 snd_wl1; + u32 snd_wl2; + u32 max_snd_window; + u8 rexmit_thresh; + u32 local_ipaddr0; + u32 local_ipaddr1; + u32 local_ipaddr2; + u32 local_ipaddr3; +}; + +struct irdma_qp_host_ctx_info { + u64 qp_compl_ctx; + union { + struct irdma_tcp_offload_info *tcp_info; + struct irdma_udp_offload_info *udp_info; + }; + union { + struct irdma_iwarp_offload_info *iwarp_info; + struct irdma_roce_offload_info *roce_info; + }; + u32 send_cq_num; + u32 rcv_cq_num; + u32 rem_endpoint_idx; + u8 stats_idx; + bool srq_valid:1; + bool tcp_info_valid:1; + bool iwarp_info_valid:1; + bool stats_idx_valid:1; + bool add_to_qoslist:1; + u8 user_pri; +}; + +struct irdma_aeqe_info { + u64 compl_ctx; + u32 qp_cq_id; + u16 ae_id; + u16 wqe_idx; + u8 tcp_state; + u8 iwarp_state; + bool qp:1; + bool cq:1; + bool sq:1; + bool in_rdrsp_wr:1; + bool out_rdrsp:1; + bool aeqe_overflow:1; + u8 q2_data_written; +}; + +struct 
irdma_allocate_stag_info { + u64 total_len; + u64 first_pm_pbl_idx; + u32 chunk_size; + u32 stag_idx; + u32 page_size; + u32 pd_id; + u16 access_rights; + bool remote_access:1; + bool use_hmc_fcn_index:1; + bool use_pf_rid:1; + u8 hmc_fcn_index; +}; + +struct irdma_mw_alloc_info { + u32 mw_stag_index; + u32 page_size; + u32 pd_id; + bool remote_access:1; + bool mw_wide:1; + bool mw1_bind_dont_vldt_key:1; +}; + +struct irdma_reg_ns_stag_info { + u64 reg_addr_pa; + u64 fbo; + void *va; + u64 total_len; + u32 page_size; + u32 chunk_size; + u32 first_pm_pbl_index; + enum irdma_addressing_type addr_type; + irdma_stag_index stag_idx; + u16 access_rights; + u32 pd_id; + irdma_stag_key stag_key; + bool use_hmc_fcn_index:1; + u8 hmc_fcn_index; + bool use_pf_rid:1; +}; + +struct irdma_fast_reg_stag_info { + u64 wr_id; + u64 reg_addr_pa; + u64 fbo; + void *va; + u64 total_len; + u32 page_size; + u32 chunk_size; + u32 first_pm_pbl_index; + enum irdma_addressing_type addr_type; + irdma_stag_index stag_idx; + u16 access_rights; + u32 pd_id; + irdma_stag_key stag_key; + bool local_fence:1; + bool read_fence:1; + bool signaled:1; + bool push_wqe:1; + bool use_hmc_fcn_index:1; + u8 hmc_fcn_index; + bool use_pf_rid:1; + bool defer_flag:1; +}; + +struct irdma_dealloc_stag_info { + u32 stag_idx; + u32 pd_id; + bool mr:1; + bool dealloc_pbl:1; +}; + +struct irdma_register_shared_stag { + void *va; + enum irdma_addressing_type addr_type; + irdma_stag_index new_stag_idx; + irdma_stag_index parent_stag_idx; + u32 access_rights; + u32 pd_id; + irdma_stag_key new_stag_key; +}; + +struct irdma_qp_init_info { + struct irdma_qp_uk_init_info qp_uk_init_info; + struct irdma_sc_pd *pd; + struct irdma_sc_vsi *vsi; + __le64 *host_ctx; + u8 *q2; + u64 sq_pa; + u64 rq_pa; + u64 host_ctx_pa; + u64 q2_pa; + u64 shadow_area_pa; + u8 sq_tph_val; + u8 rq_tph_val; + u8 type; + bool sq_tph_en:1; + bool rq_tph_en:1; + bool rcv_tph_en:1; + bool xmit_tph_en:1; + bool virtual_map:1; +}; + +struct irdma_cq_init_info { + struct irdma_sc_dev *dev; + u64 cq_base_pa; + u64 shadow_area_pa; + u32 ceq_id; + u32 shadow_read_threshold; + u8 pbl_chunk_size; + u32 first_pm_pbl_idx; + bool virtual_map:1; + bool ceqe_mask:1; + bool ceq_id_valid:1; + bool tph_en:1; + u8 tph_val; + u8 type; + struct irdma_cq_uk_init_info cq_uk_init_info; + struct irdma_sc_vsi *vsi; +}; + +struct irdma_upload_context_info { + u64 buf_pa; + u32 qp_id; + u8 qp_type; + bool freeze_qp:1; + bool raw_format:1; +}; + +struct irdma_local_mac_entry_info { + u8 mac_addr[6]; + u16 entry_idx; +}; + +struct irdma_add_arp_cache_entry_info { + u8 mac_addr[ETH_ALEN]; + u32 reach_max; + u16 arp_index; + bool permanent; +}; + +struct irdma_apbvt_info { + u16 port; + bool add; +}; + +struct irdma_qhash_table_info { + struct irdma_sc_vsi *vsi; + enum irdma_quad_hash_manage_type manage; + enum irdma_quad_entry_type entry_type; + bool vlan_valid:1; + bool ipv4_valid:1; + u8 mac_addr[ETH_ALEN]; + u16 vlan_id; + u8 user_pri; + u32 qp_num; + u32 dest_ip[4]; + u32 src_ip[4]; + u16 dest_port; + u16 src_port; +}; + +struct irdma_cqp_manage_push_page_info { + u32 push_idx; + u16 qs_handle; + u8 free_page; + u8 push_page_type; +}; + +struct irdma_qp_flush_info { + u16 sq_minor_code; + u16 sq_major_code; + u16 rq_minor_code; + u16 rq_major_code; + u16 ae_code; + u8 ae_src; + bool sq:1; + bool rq:1; + bool userflushcode:1; + bool generate_ae:1; +}; + +struct irdma_gen_ae_info { + u16 ae_code; + u8 ae_src; +}; + +struct irdma_cqp_timeout { + u64 compl_cqp_cmds; + u32 count; +}; + +struct irdma_irq_ops 
{ + void (*irdma_cfg_aeq)(struct irdma_sc_dev *dev, u32 idx); + void (*irdma_cfg_ceq)(struct irdma_sc_dev *dev, u32 ceq_id, u32 idx); + void (*irdma_dis_irq)(struct irdma_sc_dev *dev, u32 idx); + void (*irdma_en_irq)(struct irdma_sc_dev *dev, u32 idx); +}; + +struct irdma_cqp_ops { + void (*check_cqp_progress)(struct irdma_cqp_timeout *cqp_timeout, + struct irdma_sc_dev *dev); + enum irdma_status_code (*cqp_create)(struct irdma_sc_cqp *cqp, + u16 *maj_err, u16 *min_err); + enum irdma_status_code (*cqp_destroy)(struct irdma_sc_cqp *cqp); + __le64 *(*cqp_get_next_send_wqe)(struct irdma_sc_cqp *cqp, u64 scratch); + enum irdma_status_code (*cqp_init)(struct irdma_sc_cqp *cqp, + struct irdma_cqp_init_info *info); + void (*cqp_post_sq)(struct irdma_sc_cqp *cqp); + enum irdma_status_code (*poll_for_cqp_op_done)(struct irdma_sc_cqp *cqp, + u8 opcode, + struct irdma_ccq_cqe_info *cmpl_info); +}; + +struct irdma_ccq_ops { + void (*ccq_arm)(struct irdma_sc_cq *ccq); + enum irdma_status_code (*ccq_create)(struct irdma_sc_cq *ccq, + u64 scratch, bool check_overflow, + bool post_sq); + enum irdma_status_code (*ccq_create_done)(struct irdma_sc_cq *ccq); + enum irdma_status_code (*ccq_destroy)(struct irdma_sc_cq *ccq, u64 scratch, bool post_sq); + enum irdma_status_code (*ccq_get_cqe_info)(struct irdma_sc_cq *ccq, + struct irdma_ccq_cqe_info *info); + enum irdma_status_code (*ccq_init)(struct irdma_sc_cq *ccq, + struct irdma_ccq_init_info *info); +}; + +struct irdma_ceq_ops { + enum irdma_status_code (*ceq_create)(struct irdma_sc_ceq *ceq, + u64 scratch, bool post_sq); + enum irdma_status_code (*cceq_create_done)(struct irdma_sc_ceq *ceq); + enum irdma_status_code (*cceq_destroy_done)(struct irdma_sc_ceq *ceq); + enum irdma_status_code (*cceq_create)(struct irdma_sc_ceq *ceq, + u64 scratch); + enum irdma_status_code (*ceq_destroy)(struct irdma_sc_ceq *ceq, + u64 scratch, bool post_sq); + enum irdma_status_code (*ceq_init)(struct irdma_sc_ceq *ceq, + struct irdma_ceq_init_info *info); + void *(*process_ceq)(struct irdma_sc_dev *dev, + struct irdma_sc_ceq *ceq); +}; + +struct irdma_aeq_ops { + enum irdma_status_code (*aeq_init)(struct irdma_sc_aeq *aeq, + struct irdma_aeq_init_info *info); + enum irdma_status_code (*aeq_create)(struct irdma_sc_aeq *aeq, + u64 scratch, bool post_sq); + enum irdma_status_code (*aeq_destroy)(struct irdma_sc_aeq *aeq, + u64 scratch, bool post_sq); + enum irdma_status_code (*get_next_aeqe)(struct irdma_sc_aeq *aeq, + struct irdma_aeqe_info *info); + enum irdma_status_code (*repost_aeq_entries)(struct irdma_sc_dev *dev, + u32 count); + enum irdma_status_code (*aeq_create_done)(struct irdma_sc_aeq *aeq); + enum irdma_status_code (*aeq_destroy_done)(struct irdma_sc_aeq *aeq); +}; + +struct irdma_pd_ops { + void (*pd_init)(struct irdma_sc_dev *dev, struct irdma_sc_pd *pd, + u32 pd_id, int abi_ver); +}; + +struct irdma_priv_qp_ops { + enum irdma_status_code (*iw_mr_fast_register)(struct irdma_sc_qp *qp, + struct irdma_fast_reg_stag_info *info, + bool post_sq); + enum irdma_status_code (*qp_create)(struct irdma_sc_qp *qp, + struct irdma_create_qp_info *info, + u64 scratch, bool post_sq); + enum irdma_status_code (*qp_destroy)(struct irdma_sc_qp *qp, + u64 scratch, bool remove_hash_idx, + bool ignore_mw_bnd, bool post_sq); + enum irdma_status_code (*qp_flush_wqes)(struct irdma_sc_qp *qp, + struct irdma_qp_flush_info *info, + u64 scratch, bool post_sq); + enum irdma_status_code (*qp_init)(struct irdma_sc_qp *qp, + struct irdma_qp_init_info *info); + enum irdma_status_code 
(*qp_modify)(struct irdma_sc_qp *qp, + struct irdma_modify_qp_info *info, + u64 scratch, bool post_sq); + void (*qp_send_lsmm)(struct irdma_sc_qp *qp, void *lsmm_buf, u32 size, + irdma_stag stag); + void (*qp_send_lsmm_nostag)(struct irdma_sc_qp *qp, void *lsmm_buf, + u32 size); + void (*qp_send_rtt)(struct irdma_sc_qp *qp, bool read); + enum irdma_status_code (*qp_setctx)(struct irdma_sc_qp *qp, + __le64 *qp_ctx, + struct irdma_qp_host_ctx_info *info); + enum irdma_status_code (*qp_setctx_roce)(struct irdma_sc_qp *qp, __le64 *qp_ctx, + struct irdma_qp_host_ctx_info *info); + enum irdma_status_code (*qp_upload_context)(struct irdma_sc_dev *dev, + struct irdma_upload_context_info *info, + u64 scratch, bool post_sq); + enum irdma_status_code (*update_suspend_qp)(struct irdma_sc_cqp *cqp, + struct irdma_sc_qp *qp, + u64 scratch); + enum irdma_status_code (*update_resume_qp)(struct irdma_sc_cqp *cqp, + struct irdma_sc_qp *qp, + u64 scratch); +}; + +struct irdma_priv_cq_ops { + void (*cq_ack)(struct irdma_sc_cq *cq); + enum irdma_status_code (*cq_create)(struct irdma_sc_cq *cq, u64 scratch, + bool check_overflow, bool post_sq); + enum irdma_status_code (*cq_destroy)(struct irdma_sc_cq *cq, + u64 scratch, bool post_sq); + enum irdma_status_code (*cq_init)(struct irdma_sc_cq *cq, + struct irdma_cq_init_info *info); + enum irdma_status_code (*cq_modify)(struct irdma_sc_cq *cq, + struct irdma_modify_cq_info *info, + u64 scratch, bool post_sq); + void (*cq_resize)(struct irdma_sc_cq *cq, struct irdma_modify_cq_info *info); +}; + +struct irdma_mr_ops { + enum irdma_status_code (*alloc_stag)(struct irdma_sc_dev *dev, + struct irdma_allocate_stag_info *info, + u64 scratch, bool post_sq); + enum irdma_status_code (*dealloc_stag)(struct irdma_sc_dev *dev, + struct irdma_dealloc_stag_info *info, + u64 scratch, bool post_sq); + enum irdma_status_code (*mr_reg_non_shared)(struct irdma_sc_dev *dev, + struct irdma_reg_ns_stag_info *info, + u64 scratch, bool post_sq); + enum irdma_status_code (*mr_reg_shared)(struct irdma_sc_dev *dev, + struct irdma_register_shared_stag *stag, + u64 scratch, bool post_sq); + enum irdma_status_code (*mw_alloc)(struct irdma_sc_dev *dev, + struct irdma_mw_alloc_info *info, + u64 scratch, bool post_sq); + enum irdma_status_code (*query_stag)(struct irdma_sc_dev *dev, u64 scratch, + u32 stag_index, bool post_sq); +}; + +struct irdma_cqp_misc_ops { + enum irdma_status_code (*add_arp_cache_entry)(struct irdma_sc_cqp *cqp, + struct irdma_add_arp_cache_entry_info *info, + u64 scratch, bool post_sq); + enum irdma_status_code (*add_local_mac_entry)(struct irdma_sc_cqp *cqp, + struct irdma_local_mac_entry_info *info, + u64 scratch, bool post_sq); + enum irdma_status_code (*alloc_local_mac_entry)(struct irdma_sc_cqp *cqp, + u64 scratch, + bool post_sq); + enum irdma_status_code (*cqp_nop)(struct irdma_sc_cqp *cqp, u64 scratch, bool post_sq); + enum irdma_status_code (*del_arp_cache_entry)(struct irdma_sc_cqp *cqp, + u64 scratch, + u16 arp_index, + bool post_sq); + enum irdma_status_code (*del_local_mac_entry)(struct irdma_sc_cqp *cqp, + u64 scratch, + u16 entry_idx, + u8 ignore_ref_count, + bool post_sq); + enum irdma_status_code (*gather_stats)(struct irdma_sc_cqp *cqp, + struct irdma_stats_gather_info *info, + u64 scratch); + enum irdma_status_code (*manage_apbvt_entry)(struct irdma_sc_cqp *cqp, + struct irdma_apbvt_info *info, + u64 scratch, bool post_sq); + enum irdma_status_code (*manage_push_page)(struct irdma_sc_cqp *cqp, + struct irdma_cqp_manage_push_page_info *info, + u64 
scratch, bool post_sq); + enum irdma_status_code (*manage_qhash_table_entry)(struct irdma_sc_cqp *cqp, + struct irdma_qhash_table_info *info, + u64 scratch, bool post_sq); + enum irdma_status_code (*manage_stats_instance)(struct irdma_sc_cqp *cqp, + struct irdma_stats_inst_info *info, + bool alloc, u64 scratch); + enum irdma_status_code (*manage_ws_node)(struct irdma_sc_cqp *cqp, + struct irdma_ws_node_info *info, + enum irdma_ws_node_op node_op, + u64 scratch); + enum irdma_status_code (*query_arp_cache_entry)(struct irdma_sc_cqp *cqp, + u64 scratch, u16 arp_index, bool post_sq); + enum irdma_status_code (*query_rdma_features)(struct irdma_sc_cqp *cqp, + struct irdma_dma_mem *buf, + u64 scratch); + enum irdma_status_code (*set_up_map)(struct irdma_sc_cqp *cqp, + struct irdma_up_info *info, + u64 scratch); +}; + +struct irdma_hmc_ops { + enum irdma_status_code (*cfg_iw_fpm)(struct irdma_sc_dev *dev, + u8 hmc_fn_id); + enum irdma_status_code (*commit_fpm_val)(struct irdma_sc_cqp *cqp, + u64 scratch, u8 hmc_fn_id, + struct irdma_dma_mem *commit_fpm_mem, + bool post_sq, u8 wait_type); + enum irdma_status_code (*commit_fpm_val_done)(struct irdma_sc_cqp *cqp); + enum irdma_status_code (*create_hmc_object)(struct irdma_sc_dev *dev, + struct irdma_hmc_create_obj_info *info); + enum irdma_status_code (*del_hmc_object)(struct irdma_sc_dev *dev, + struct irdma_hmc_del_obj_info *info, + bool reset); + enum irdma_status_code (*init_iw_hmc)(struct irdma_sc_dev *dev, u8 hmc_fn_id); + enum irdma_status_code (*manage_hmc_pm_func_table)(struct irdma_sc_cqp *cqp, + struct irdma_hmc_fcn_info *info, + u64 scratch, + bool post_sq); + enum irdma_status_code (*manage_hmc_pm_func_table_done)(struct irdma_sc_cqp *cqp); + enum irdma_status_code (*parse_fpm_commit_buf)(struct irdma_sc_dev *dev, + __le64 *buf, + struct irdma_hmc_obj_info *info, + u32 *sd); + enum irdma_status_code (*parse_fpm_query_buf)(struct irdma_sc_dev *dev, + __le64 *buf, + struct irdma_hmc_info *hmc_info, + struct irdma_hmc_fpm_misc *hmc_fpm_misc); + enum irdma_status_code (*pf_init_vfhmc)(struct irdma_sc_dev *dev, + u8 vf_hmc_fn_id, + u32 *vf_cnt_array); + enum irdma_status_code (*query_fpm_val)(struct irdma_sc_cqp *cqp, + u64 scratch, + u8 hmc_fn_id, + struct irdma_dma_mem *query_fpm_mem, + bool post_sq, u8 wait_type); + enum irdma_status_code (*query_fpm_val_done)(struct irdma_sc_cqp *cqp); + enum irdma_status_code (*static_hmc_pages_allocated)(struct irdma_sc_cqp *cqp, + u64 scratch, + u8 hmc_fn_id, + bool post_sq, + bool poll_registers); + enum irdma_status_code (*vf_cfg_vffpm)(struct irdma_sc_dev *dev, u32 *vf_cnt_array); +}; + +struct cqp_info { + union { + struct { + struct irdma_sc_qp *qp; + struct irdma_create_qp_info info; + u64 scratch; + } qp_create; + + struct { + struct irdma_sc_qp *qp; + struct irdma_modify_qp_info info; + u64 scratch; + } qp_modify; + + struct { + struct irdma_sc_qp *qp; + u64 scratch; + bool remove_hash_idx; + bool ignore_mw_bnd; + } qp_destroy; + + struct { + struct irdma_sc_cq *cq; + u64 scratch; + bool check_overflow; + } cq_create; + + struct { + struct irdma_sc_cq *cq; + struct irdma_modify_cq_info info; + u64 scratch; + } cq_modify; + + struct { + struct irdma_sc_cq *cq; + u64 scratch; + } cq_destroy; + + struct { + struct irdma_sc_dev *dev; + struct irdma_allocate_stag_info info; + u64 scratch; + } alloc_stag; + + struct { + struct irdma_sc_dev *dev; + struct irdma_mw_alloc_info info; + u64 scratch; + } mw_alloc; + + struct { + struct irdma_sc_dev *dev; + struct irdma_reg_ns_stag_info info; + u64 
scratch; + } mr_reg_non_shared; + + struct { + struct irdma_sc_dev *dev; + struct irdma_dealloc_stag_info info; + u64 scratch; + } dealloc_stag; + + struct { + struct irdma_sc_cqp *cqp; + struct irdma_add_arp_cache_entry_info info; + u64 scratch; + } add_arp_cache_entry; + + struct { + struct irdma_sc_cqp *cqp; + u64 scratch; + u16 arp_index; + } del_arp_cache_entry; + + struct { + struct irdma_sc_cqp *cqp; + struct irdma_local_mac_entry_info info; + u64 scratch; + } add_local_mac_entry; + + struct { + struct irdma_sc_cqp *cqp; + u64 scratch; + u8 entry_idx; + u8 ignore_ref_count; + } del_local_mac_entry; + + struct { + struct irdma_sc_cqp *cqp; + u64 scratch; + } alloc_local_mac_entry; + + struct { + struct irdma_sc_cqp *cqp; + struct irdma_cqp_manage_push_page_info info; + u64 scratch; + } manage_push_page; + + struct { + struct irdma_sc_dev *dev; + struct irdma_upload_context_info info; + u64 scratch; + } qp_upload_context; + + struct { + struct irdma_sc_dev *dev; + struct irdma_hmc_fcn_info info; + u64 scratch; + } manage_hmc_pm; + + struct { + struct irdma_sc_ceq *ceq; + u64 scratch; + } ceq_create; + + struct { + struct irdma_sc_ceq *ceq; + u64 scratch; + } ceq_destroy; + + struct { + struct irdma_sc_aeq *aeq; + u64 scratch; + } aeq_create; + + struct { + struct irdma_sc_aeq *aeq; + u64 scratch; + } aeq_destroy; + + struct { + struct irdma_sc_qp *qp; + struct irdma_qp_flush_info info; + u64 scratch; + } qp_flush_wqes; + + struct { + struct irdma_sc_qp *qp; + struct irdma_gen_ae_info info; + u64 scratch; + } gen_ae; + + struct { + struct irdma_sc_cqp *cqp; + void *fpm_val_va; + u64 fpm_val_pa; + u8 hmc_fn_id; + u64 scratch; + } query_fpm_val; + + struct { + struct irdma_sc_cqp *cqp; + void *fpm_val_va; + u64 fpm_val_pa; + u8 hmc_fn_id; + u64 scratch; + } commit_fpm_val; + + struct { + struct irdma_sc_cqp *cqp; + struct irdma_apbvt_info info; + u64 scratch; + } manage_apbvt_entry; + + struct { + struct irdma_sc_cqp *cqp; + struct irdma_qhash_table_info info; + u64 scratch; + } manage_qhash_table_entry; + + struct { + struct irdma_sc_dev *dev; + struct irdma_update_sds_info info; + u64 scratch; + } update_pe_sds; + + struct { + struct irdma_sc_cqp *cqp; + struct irdma_sc_qp *qp; + u64 scratch; + } suspend_resume; + + struct { + struct irdma_sc_cqp *cqp; + struct irdma_ah_info info; + u64 scratch; + } ah_create; + + struct { + struct irdma_sc_cqp *cqp; + struct irdma_ah_info info; + u64 scratch; + } ah_destroy; + + struct { + struct irdma_sc_cqp *cqp; + struct irdma_mcast_grp_info info; + u64 scratch; + } mc_create; + + struct { + struct irdma_sc_cqp *cqp; + struct irdma_mcast_grp_info info; + u64 scratch; + } mc_destroy; + + struct { + struct irdma_sc_cqp *cqp; + struct irdma_mcast_grp_info info; + u64 scratch; + } mc_modify; + + struct { + struct irdma_sc_cqp *cqp; + struct irdma_stats_inst_info info; + u64 scratch; + } stats_manage; + + struct { + struct irdma_sc_cqp *cqp; + struct irdma_stats_gather_info info; + u64 scratch; + } stats_gather; + + struct { + struct irdma_sc_cqp *cqp; + struct irdma_ws_node_info info; + u64 scratch; + } ws_node; + + struct { + struct irdma_sc_cqp *cqp; + struct irdma_up_info info; + u64 scratch; + } up_map; + + struct { + struct irdma_sc_cqp *cqp; + struct irdma_dma_mem query_buff_mem; + u64 scratch; + } query_rdma; + } u; +}; + +struct cqp_cmds_info { + struct list_head cqp_cmd_entry; + u8 cqp_cmd; + u8 post_sq; + struct cqp_info in; +}; + +struct irdma_virtchnl_work_info { + void (*callback_fcn)(void *vf_dev); + void *worker_vf_dev; +}; +#endif /* 
IRDMA_TYPE_H */ From patchwork Fri Apr 17 17:12:41 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeff Kirsher X-Patchwork-Id: 221093 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.8 required=3.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 108ECC3815B for ; Fri, 17 Apr 2020 17:13:01 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id E91BD2078E for ; Fri, 17 Apr 2020 17:13:00 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729072AbgDQRNA (ORCPT ); Fri, 17 Apr 2020 13:13:00 -0400 Received: from mga01.intel.com ([192.55.52.88]:30119 "EHLO mga01.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728534AbgDQRM6 (ORCPT ); Fri, 17 Apr 2020 13:12:58 -0400 IronPort-SDR: qo2hFIEqF0eL5Um6BYABqiDFjmwv+ngtYd3XRKOC3W9dZgRsSG2LJJFiMLfKMNUPlZZ24Cy9vv sfl6jNUuokSw== X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 17 Apr 2020 10:12:55 -0700 IronPort-SDR: k+pXinb4zF7xtkcYzeIT1U2Frr3rmoEBO1vgSqXrm7/05cbLzsWy/0xwZguyqufre0Z7sp/Oj1 F1Z/8ZErv5VQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.72,395,1580803200"; d="scan'208";a="364383716" Received: from jtkirshe-desk1.jf.intel.com ([134.134.177.86]) by fmsmga001.fm.intel.com with ESMTP; 17 Apr 2020 10:12:55 -0700 From: Jeff Kirsher To: gregkh@linuxfoundation.org, jgg@ziepe.ca Cc: Mustafa Ismail , netdev@vger.kernel.org, linux-rdma@vger.kernel.org, nhorman@redhat.com, sassmann@redhat.com, Shiraz Saleem Subject: [RFC PATCH v5 06/16] RDMA/irdma: Add QoS definitions Date: Fri, 17 Apr 2020 10:12:41 -0700 Message-Id: <20200417171251.1533371-7-jeffrey.t.kirsher@intel.com> X-Mailer: git-send-email 2.25.2 In-Reply-To: <20200417171251.1533371-1-jeffrey.t.kirsher@intel.com> References: <20200417171251.1533371-1-jeffrey.t.kirsher@intel.com> MIME-Version: 1.0 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Mustafa Ismail Add definitions for managing the RDMA HW work scheduler (WS) tree. A WS node is created via a control QP operation with the bandwidth allocation, arbitration scheme, and traffic class of the QP specified. The Qset handle returned associates the QoS parameters for the QP. The Qset is registered with the LAN and a equivalent node is created in the LAN packet scheduler tree. 
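To make the intended usage concrete, a minimal sketch of a caller follows. The snippet is illustrative only and not part of the patch: the helper name and call site are hypothetical (the real hook-up is in the QP setup/teardown paths elsewhere in this series), but the irdma_ws_add()/irdma_ws_remove() prototypes and the qs_handle plumbing match the definitions added in ws.h and type.h.

/* Illustrative sketch, assuming the irdma definitions from this series
 * (ws.h, type.h). Not part of the patch; the real hook-up happens in
 * the QP code added later in the series.
 */
static enum irdma_status_code
example_bind_qp_to_ws(struct irdma_sc_vsi *vsi, struct irdma_sc_qp *qp,
		      u8 user_pri)
{
	enum irdma_status_code ret;

	/* Build (or reuse) the root -> VSI -> TC leaf path for this user
	 * priority. This issues the CQP WS node commands and registers
	 * the Qset with the LAN scheduler via vsi->register_qset().
	 */
	ret = irdma_ws_add(vsi, user_pri);
	if (ret)
		return ret;

	/* The Qset handle returned by the WS add is cached per user
	 * priority in the VSI QoS table; the QP picks it up from there.
	 */
	qp->user_pri = user_pri;
	qp->qs_handle = vsi->qos[user_pri].qs_handle;

	return 0;
}

On QP teardown the inverse is irdma_ws_remove(vsi, qp->user_pri); the leaf (and, once empty, the VSI node and tree root) is freed only when no QP on the VSI still uses that traffic class.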
Signed-off-by: Mustafa Ismail Signed-off-by: Shiraz Saleem --- drivers/infiniband/hw/irdma/ws.c | 395 +++++++++++++++++++++++++++++++ drivers/infiniband/hw/irdma/ws.h | 39 +++ 2 files changed, 434 insertions(+) create mode 100644 drivers/infiniband/hw/irdma/ws.c create mode 100644 drivers/infiniband/hw/irdma/ws.h diff --git a/drivers/infiniband/hw/irdma/ws.c b/drivers/infiniband/hw/irdma/ws.c new file mode 100644 index 000000000000..4828d7b0aee6 --- /dev/null +++ b/drivers/infiniband/hw/irdma/ws.c @@ -0,0 +1,395 @@ +// SPDX-License-Identifier: GPL-2.0 or Linux-OpenIB +/* Copyright (c) 2019 Intel Corporation */ +#include "osdep.h" +#include "status.h" +#include "hmc.h" +#include "defs.h" +#include "type.h" +#include "protos.h" + +#include "ws.h" + +/** + * irdma_alloc_node - Allocate a WS node and init + * @vsi: vsi pointer + * @user_pri: user priority + * @node_type: Type of node, leaf or parent + * @parent: parent node pointer + */ +static struct irdma_ws_node *irdma_alloc_node(struct irdma_sc_vsi *vsi, + u8 user_pri, + enum irdma_ws_node_type node_type, + struct irdma_ws_node *parent) +{ + struct irdma_virt_mem ws_mem; + struct irdma_ws_node *node; + u16 node_index = 0; + + ws_mem.size = sizeof(struct irdma_ws_node); + ws_mem.va = kzalloc(ws_mem.size, GFP_ATOMIC); + if (!ws_mem.va) + return NULL; + + if (parent || vsi->vm_vf_type == IRDMA_VF_TYPE) { + node_index = irdma_alloc_ws_node_id(vsi->dev); + if (node_index == IRDMA_WS_NODE_INVALID) { + kfree(ws_mem.va); + return NULL; + } + } + + node = ws_mem.va; + node->index = node_index; + node->vsi_index = vsi->vsi_idx; + INIT_LIST_HEAD(&node->child_list_head); + if (node_type == WS_NODE_TYPE_LEAF) { + node->type_leaf = true; + node->traffic_class = vsi->qos[user_pri].traffic_class; + node->user_pri = user_pri; + node->rel_bw = vsi->qos[user_pri].rel_bw; + if (!node->rel_bw) + node->rel_bw = 1; + + node->lan_qs_handle = vsi->qos[user_pri].lan_qos_handle; + node->prio_type = IRDMA_PRIO_WEIGHTED_RR; + } else { + node->rel_bw = 1; + node->prio_type = IRDMA_PRIO_WEIGHTED_RR; + } + + node->parent = parent; + + return node; +} + +/** + * irdma_free_node - Free a WS node + * @vsi: VSI stricture of device + * @node: Pointer to node to free + */ +static void irdma_free_node(struct irdma_sc_vsi *vsi, + struct irdma_ws_node *node) +{ + struct irdma_virt_mem ws_mem; + + if (node->index) + irdma_free_ws_node_id(vsi->dev, node->index); + + ws_mem.va = node; + ws_mem.size = sizeof(struct irdma_ws_node); + kfree(ws_mem.va); +} + +/** + * irdma_ws_cqp_cmd - Post CQP work scheduler node cmd + * @vsi: vsi pointer + * @node: pointer to node + * @cmd: add, remove or modify + */ +static enum irdma_status_code +irdma_ws_cqp_cmd(struct irdma_sc_vsi *vsi, struct irdma_ws_node *node, u8 cmd) +{ + struct irdma_ws_node_info node_info = {}; + + node_info.id = node->index; + node_info.vsi = node->vsi_index; + if (node->parent) + node_info.parent_id = node->parent->index; + else + node_info.parent_id = node_info.id; + + node_info.weight = node->rel_bw; + node_info.tc = node->traffic_class; + node_info.prio_type = node->prio_type; + node_info.type_leaf = node->type_leaf; + node_info.enable = node->enable; + if (irdma_cqp_ws_node_cmd(vsi->dev, cmd, &node_info)) { + dev_dbg(rfdev_to_dev(vsi->dev), "WS: CQP WS CMD failed\n"); + return IRDMA_ERR_NO_MEMORY; + } + + if (node->type_leaf && cmd == IRDMA_OP_WS_ADD_NODE) { + node->qs_handle = node_info.qs_handle; + vsi->qos[node->user_pri].qs_handle = node_info.qs_handle; + } + + return 0; +} + +/** + * ws_find_node - Find SC WS 
node based on VSI id or TC + * @parent: parent node of First VSI or TC node + * @match_val: value to match + * @type: match type VSI/TC + */ +static struct irdma_ws_node *ws_find_node(struct irdma_ws_node *parent, + u16 match_val, + enum irdma_ws_match_type type) +{ + struct irdma_ws_node *node; + + switch (type) { + case WS_MATCH_TYPE_VSI: + list_for_each_entry(node, &parent->child_list_head, siblings) { + if (node->vsi_index == match_val) + return node; + } + break; + case WS_MATCH_TYPE_TC: + list_for_each_entry(node, &parent->child_list_head, siblings) { + if (node->traffic_class == match_val) + return node; + } + break; + default: + break; + } + + return NULL; +} + +/** + * irdma_tc_in_use - Checks to see if a leaf node is in use + * @vsi: vsi pointer + * @user_pri: user priority + */ +static bool irdma_tc_in_use(struct irdma_sc_vsi *vsi, u8 user_pri) +{ + unsigned long flags; + int i; + + spin_lock_irqsave(&vsi->qos[user_pri].lock, flags); + if (!list_empty(&vsi->qos[user_pri].qplist)) { + spin_unlock_irqrestore(&vsi->qos[user_pri].lock, flags); + return true; + } + + /* Check if the traffic class associated with the given user priority + * is in use by any other user priority. If so, nothing left to do + */ + for (i = 0; i < IRDMA_MAX_USER_PRIORITY; i++) { + if (vsi->qos[i].traffic_class == vsi->qos[user_pri].traffic_class && + !list_empty(&vsi->qos[i].qplist)) { + spin_unlock_irqrestore(&vsi->qos[user_pri].lock, flags); + return true; + } + } + spin_unlock_irqrestore(&vsi->qos[user_pri].lock, flags); + + return false; +} + +/** + * irdma_remove_leaf - Remove leaf node unconditionally + * @vsi: vsi pointer + * @user_pri: user priority + */ +static void irdma_remove_leaf(struct irdma_sc_vsi *vsi, u8 user_pri) +{ + struct irdma_ws_node *ws_tree_root, *vsi_node, *tc_node; + + ws_tree_root = vsi->dev->ws_tree_root; + if (!ws_tree_root) + return; + + vsi_node = ws_find_node(ws_tree_root, vsi->vsi_idx, + WS_MATCH_TYPE_VSI); + if (!vsi_node) + return; + + tc_node = ws_find_node(vsi_node, + vsi->qos[user_pri].traffic_class, + WS_MATCH_TYPE_TC); + if (!tc_node) + return; + + irdma_ws_cqp_cmd(vsi, tc_node, IRDMA_OP_WS_DELETE_NODE); + vsi->unregister_qset(vsi, tc_node); + list_del(&tc_node->siblings); + irdma_free_node(vsi, tc_node); + /* Check if VSI node can be freed */ + if (list_empty(&vsi_node->child_list_head)) { + irdma_ws_cqp_cmd(vsi, vsi_node, IRDMA_OP_WS_DELETE_NODE); + list_del(&vsi_node->siblings); + irdma_free_node(vsi, vsi_node); + /* Free head node there are no remaining VSI nodes */ + if (list_empty(&ws_tree_root->child_list_head)) { + irdma_ws_cqp_cmd(vsi, ws_tree_root, + IRDMA_OP_WS_DELETE_NODE); + irdma_free_node(vsi, ws_tree_root); + vsi->dev->ws_tree_root = NULL; + } + } +} + +/** + * irdma_ws_add - Build work scheduler tree, set RDMA qs_handle + * @vsi: vsi pointer + * @user_pri: user priority + */ +enum irdma_status_code irdma_ws_add(struct irdma_sc_vsi *vsi, u8 user_pri) +{ + struct irdma_ws_node *ws_tree_root; + struct irdma_ws_node *vsi_node; + struct irdma_ws_node *tc_node; + u16 traffic_class; + enum irdma_status_code ret = 0; + int i; + + mutex_lock(&vsi->dev->ws_mutex); + if (vsi->tc_change_pending) { + ret = IRDMA_ERR_NOT_READY; + goto exit; + } + + ws_tree_root = vsi->dev->ws_tree_root; + if (!ws_tree_root) { + dev_dbg(rfdev_to_dev(vsi->dev), "WS: Creating root node\n"); + ws_tree_root = irdma_alloc_node(vsi, user_pri, + WS_NODE_TYPE_PARENT, NULL); + if (!ws_tree_root) { + ret = IRDMA_ERR_NO_MEMORY; + goto exit; + } + + ret = irdma_ws_cqp_cmd(vsi, ws_tree_root, 
IRDMA_OP_WS_ADD_NODE); + if (ret) { + irdma_free_node(vsi, ws_tree_root); + goto exit; + } + + vsi->dev->ws_tree_root = ws_tree_root; + } + + /* Find a second tier node that matches the VSI */ + vsi_node = ws_find_node(ws_tree_root, vsi->vsi_idx, + WS_MATCH_TYPE_VSI); + + /* If VSI node doesn't exist, add one */ + if (!vsi_node) { + dev_dbg(rfdev_to_dev(vsi->dev), + "WS: Node not found matching VSI %d\n", vsi->vsi_idx); + vsi_node = irdma_alloc_node(vsi, user_pri, WS_NODE_TYPE_PARENT, + ws_tree_root); + if (!vsi_node) { + ret = IRDMA_ERR_NO_MEMORY; + goto vsi_add_err; + } + + ret = irdma_ws_cqp_cmd(vsi, vsi_node, IRDMA_OP_WS_ADD_NODE); + if (ret) { + irdma_free_node(vsi, vsi_node); + goto vsi_add_err; + } + + list_add(&vsi_node->siblings, &ws_tree_root->child_list_head); + } + + dev_dbg(rfdev_to_dev(vsi->dev), + "WS: Using node %d which represents VSI %d\n", + vsi_node->index, vsi->vsi_idx); + traffic_class = vsi->qos[user_pri].traffic_class; + tc_node = ws_find_node(vsi_node, traffic_class, + WS_MATCH_TYPE_TC); + if (!tc_node) { + /* Add leaf node */ + dev_dbg(rfdev_to_dev(vsi->dev), + "WS: Node not found matching VSI %d and TC %d\n", + vsi->vsi_idx, traffic_class); + tc_node = irdma_alloc_node(vsi, user_pri, WS_NODE_TYPE_LEAF, + vsi_node); + if (!tc_node) { + ret = IRDMA_ERR_NO_MEMORY; + goto leaf_add_err; + } + + ret = irdma_ws_cqp_cmd(vsi, tc_node, IRDMA_OP_WS_ADD_NODE); + if (ret) { + irdma_free_node(vsi, tc_node); + goto leaf_add_err; + } + + list_add(&tc_node->siblings, &vsi_node->child_list_head); + /* + * callback to LAN to update the LAN tree with our node + */ + ret = vsi->register_qset(vsi, tc_node); + if (ret) + goto reg_err; + + tc_node->enable = true; + ret = irdma_ws_cqp_cmd(vsi, tc_node, IRDMA_OP_WS_MODIFY_NODE); + if (ret) + goto reg_err; + } + dev_dbg(rfdev_to_dev(vsi->dev), + "WS: Using node %d which represents VSI %d TC %d\n", + tc_node->index, vsi->vsi_idx, traffic_class); + /* + * Iterate through other UPs and update the QS handle if they have + * a matching traffic class. 
+ */ + for (i = 0; i < IRDMA_MAX_USER_PRIORITY; i++) { + if (vsi->qos[i].traffic_class == traffic_class) { + vsi->qos[i].qs_handle = tc_node->qs_handle; + vsi->qos[i].lan_qos_handle = tc_node->lan_qs_handle; + vsi->qos[i].l2_sched_node_id = tc_node->l2_sched_node_id; + } + } + goto exit; + +leaf_add_err: + if (list_empty(&vsi_node->child_list_head)) { + if (irdma_ws_cqp_cmd(vsi, vsi_node, IRDMA_OP_WS_DELETE_NODE)) + goto exit; + list_del(&vsi_node->siblings); + irdma_free_node(vsi, vsi_node); + } + +vsi_add_err: + /* Free head node there are no remaining VSI nodes */ + if (list_empty(&ws_tree_root->child_list_head)) { + irdma_ws_cqp_cmd(vsi, ws_tree_root, IRDMA_OP_WS_DELETE_NODE); + vsi->dev->ws_tree_root = NULL; + irdma_free_node(vsi, ws_tree_root); + } + +exit: + mutex_unlock(&vsi->dev->ws_mutex); + return ret; + +reg_err: + mutex_unlock(&vsi->dev->ws_mutex); + irdma_ws_remove(vsi, user_pri); + return ret; +} + +/** + * irdma_ws_remove - Free WS scheduler node, update WS tree + * @vsi: vsi pointer + * @user_pri: user priority + */ +void irdma_ws_remove(struct irdma_sc_vsi *vsi, u8 user_pri) +{ + mutex_lock(&vsi->dev->ws_mutex); + if (irdma_tc_in_use(vsi, user_pri)) + goto exit; + + irdma_remove_leaf(vsi, user_pri); +exit: + mutex_unlock(&vsi->dev->ws_mutex); +} + +/** + * irdma_ws_reset - Reset entire WS tree + * @vsi: vsi pointer + */ +void irdma_ws_reset(struct irdma_sc_vsi *vsi) +{ + u8 i; + + mutex_lock(&vsi->dev->ws_mutex); + for (i = 0; i < IRDMA_MAX_USER_PRIORITY; ++i) + irdma_remove_leaf(vsi, i); + mutex_unlock(&vsi->dev->ws_mutex); +} diff --git a/drivers/infiniband/hw/irdma/ws.h b/drivers/infiniband/hw/irdma/ws.h new file mode 100644 index 000000000000..442864284801 --- /dev/null +++ b/drivers/infiniband/hw/irdma/ws.h @@ -0,0 +1,39 @@ +/* SPDX-License-Identifier: GPL-2.0 or Linux-OpenIB */ +/* Copyright (c) 2015 - 2019 Intel Corporation */ +#ifndef IRDMA_WS_H +#define IRDMA_WS_H + +#include "osdep.h" + +enum irdma_ws_node_type { + WS_NODE_TYPE_PARENT, + WS_NODE_TYPE_LEAF, +}; + +enum irdma_ws_match_type { + WS_MATCH_TYPE_VSI, + WS_MATCH_TYPE_TC, +}; + +struct irdma_ws_node { + struct list_head siblings; + struct list_head child_list_head; + struct irdma_ws_node *parent; + u64 lan_qs_handle; /* opaque handle used by LAN */ + u32 l2_sched_node_id; + u16 index; + u16 qs_handle; + u16 vsi_index; + u8 traffic_class; + u8 user_pri; + u8 rel_bw; + u8 abstraction_layer; /* used for splitting a TC */ + u8 prio_type; + bool type_leaf:1; + bool enable:1; +}; + +enum irdma_status_code irdma_ws_add(struct irdma_sc_vsi *vsi, u8 user_pri); +void irdma_ws_remove(struct irdma_sc_vsi *vsi, u8 user_pri); +void irdma_ws_reset(struct irdma_sc_vsi *vsi); +#endif /* IRDMA_WS_H */ From patchwork Fri Apr 17 17:12:43 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeff Kirsher X-Patchwork-Id: 221092 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.8 required=3.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 433BDC38A2B for ; Fri, 17 Apr 2020 17:13:06 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 202E82076D for ; 
Fri, 17 Apr 2020 17:13:06 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729431AbgDQRNF (ORCPT ); Fri, 17 Apr 2020 13:13:05 -0400 Received: from mga01.intel.com ([192.55.52.88]:30114 "EHLO mga01.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728534AbgDQRNE (ORCPT ); Fri, 17 Apr 2020 13:13:04 -0400 IronPort-SDR: v/aiqpK+fBlvxoezLxrlD7IWOOrxInJcTXxcCMXJ37G0ug8M6pggcm0xjHiDei92Lzo4A0T5fy yRTycVb+xYxg== X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 17 Apr 2020 10:12:56 -0700 IronPort-SDR: gZXZQVFtFSrdK8BfIhrmTyreNQxaA0h56VpnuGLBlrjCSa7lwJxxPjQNne109cr/knknmnv4MF 4UZs7xxNtJlg== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.72,395,1580803200"; d="scan'208";a="364383726" Received: from jtkirshe-desk1.jf.intel.com ([134.134.177.86]) by fmsmga001.fm.intel.com with ESMTP; 17 Apr 2020 10:12:56 -0700 From: Jeff Kirsher To: gregkh@linuxfoundation.org, jgg@ziepe.ca Cc: Mustafa Ismail , netdev@vger.kernel.org, linux-rdma@vger.kernel.org, nhorman@redhat.com, sassmann@redhat.com, Shiraz Saleem Subject: [RFC PATCH v5 08/16] RDMA/irdma: Add PBLE resource manager Date: Fri, 17 Apr 2020 10:12:43 -0700 Message-Id: <20200417171251.1533371-9-jeffrey.t.kirsher@intel.com> X-Mailer: git-send-email 2.25.2 In-Reply-To: <20200417171251.1533371-1-jeffrey.t.kirsher@intel.com> References: <20200417171251.1533371-1-jeffrey.t.kirsher@intel.com> MIME-Version: 1.0 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Mustafa Ismail Implement a Physical Buffer List Entry (PBLE) resource manager to manage a pool of PBLE HMC resource objects. 
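[ Reviewer note, not part of the diff below: a minimal usage sketch of the PBLE pool interfaces this patch adds in pble.h (irdma_get_pble()/irdma_free_pble()). It only illustrates the intended calling pattern; the function name irdma_pble_usage_example() and the request size are made up for illustration, and the snippet assumes the rest of the driver's headers are in scope as they are in pble.c, so it builds only in-tree. ]

#include "pble.h"

/* Illustrative only: request PBLEs from the pool and return them. */
static int irdma_pble_usage_example(struct irdma_hmc_pble_rsrc *pble_rsrc)
{
        struct irdma_pble_alloc palloc = {};
        u32 cnt = 2 * PBLE_PER_PAGE;    /* > 512, so a level 2 layout may be used */

        /* level1_only = false lets the manager fall back to a two-level
         * (root + leaf) allocation when one contiguous level 1 range is
         * not available.
         */
        if (irdma_get_pble(pble_rsrc, &palloc, cnt, false))
                return -ENOMEM;

        /* palloc.level reports PBLE_LEVEL_1 or PBLE_LEVEL_2; the returned
         * indexes are what a consumer would program into its HW context.
         */

        /* Hand the PBLEs back to the pool when the consumer is torn down. */
        irdma_free_pble(pble_rsrc, &palloc);
        return 0;
}
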
Signed-off-by: Mustafa Ismail Signed-off-by: Shiraz Saleem --- drivers/infiniband/hw/irdma/pble.c | 510 +++++++++++++++++++++++++++++ drivers/infiniband/hw/irdma/pble.h | 135 ++++++++ 2 files changed, 645 insertions(+) create mode 100644 drivers/infiniband/hw/irdma/pble.c create mode 100644 drivers/infiniband/hw/irdma/pble.h diff --git a/drivers/infiniband/hw/irdma/pble.c b/drivers/infiniband/hw/irdma/pble.c new file mode 100644 index 000000000000..225083726d2f --- /dev/null +++ b/drivers/infiniband/hw/irdma/pble.c @@ -0,0 +1,510 @@ +// SPDX-License-Identifier: GPL-2.0 or Linux-OpenIB +/* Copyright (c) 2015 - 2019 Intel Corporation */ +#include "osdep.h" +#include "status.h" +#include "hmc.h" +#include "defs.h" +#include "type.h" +#include "protos.h" +#include "pble.h" + +static enum irdma_status_code +add_pble_prm(struct irdma_hmc_pble_rsrc *pble_rsrc); + +/** + * irdma_destroy_pble_prm - destroy prm during module unload + * @pble_rsrc: pble resources + */ +void irdma_destroy_pble_prm(struct irdma_hmc_pble_rsrc *pble_rsrc) +{ + struct irdma_chunk *chunk; + struct irdma_pble_prm *pinfo = &pble_rsrc->pinfo; + + while (!list_empty(&pinfo->clist)) { + chunk = (struct irdma_chunk *)pinfo->clist.next; + list_del(&chunk->list); + if (chunk->type == PBLE_SD_PAGED) + irdma_pble_free_paged_mem(chunk); + if (chunk->bitmapbuf) + kfree(chunk->bitmapmem.va); + kfree(chunk->chunkmem.va); + } +} + +/** + * irdma_hmc_init_pble - Initialize pble resources during module load + * @dev: irdma_sc_dev struct + * @pble_rsrc: pble resources + */ +enum irdma_status_code +irdma_hmc_init_pble(struct irdma_sc_dev *dev, + struct irdma_hmc_pble_rsrc *pble_rsrc) +{ + struct irdma_hmc_info *hmc_info; + u32 fpm_idx = 0; + enum irdma_status_code status = 0; + + hmc_info = dev->hmc_info; + pble_rsrc->dev = dev; + pble_rsrc->fpm_base_addr = hmc_info->hmc_obj[IRDMA_HMC_IW_PBLE].base; + /* Start pble' on 4k boundary */ + if (pble_rsrc->fpm_base_addr & 0xfff) + fpm_idx = (4096 - (pble_rsrc->fpm_base_addr & 0xfff)) >> 3; + pble_rsrc->unallocated_pble = + hmc_info->hmc_obj[IRDMA_HMC_IW_PBLE].cnt - fpm_idx; + pble_rsrc->next_fpm_addr = pble_rsrc->fpm_base_addr + (fpm_idx << 3); + pble_rsrc->pinfo.pble_shift = PBLE_SHIFT; + + spin_lock_init(&pble_rsrc->pinfo.prm_lock); + INIT_LIST_HEAD(&pble_rsrc->pinfo.clist); + if (add_pble_prm(pble_rsrc)) { + irdma_destroy_pble_prm(pble_rsrc); + status = IRDMA_ERR_NO_MEMORY; + } + + return status; +} + +/** + * get_sd_pd_idx - Returns sd index, pd index and rel_pd_idx from fpm address + * @pble_rsrc: structure containing fpm address + * @idx: where to return indexes + */ +static void get_sd_pd_idx(struct irdma_hmc_pble_rsrc *pble_rsrc, + struct sd_pd_idx *idx) +{ + idx->sd_idx = (u32)pble_rsrc->next_fpm_addr / IRDMA_HMC_DIRECT_BP_SIZE; + idx->pd_idx = (u32)(pble_rsrc->next_fpm_addr / IRDMA_HMC_PAGED_BP_SIZE); + idx->rel_pd_idx = (idx->pd_idx % IRDMA_HMC_PD_CNT_IN_SD); +} + +/** + * add_sd_direct - add sd direct for pble + * @pble_rsrc: pble resource ptr + * @info: page info for sd + */ +static enum irdma_status_code +add_sd_direct(struct irdma_hmc_pble_rsrc *pble_rsrc, + struct irdma_add_page_info *info) +{ + struct irdma_sc_dev *dev = pble_rsrc->dev; + enum irdma_status_code ret_code = 0; + struct sd_pd_idx *idx = &info->idx; + struct irdma_chunk *chunk = info->chunk; + struct irdma_hmc_info *hmc_info = info->hmc_info; + struct irdma_hmc_sd_entry *sd_entry = info->sd_entry; + u32 offset = 0; + + if (!sd_entry->valid) { + ret_code = irdma_add_sd_table_entry(dev->hw, hmc_info, + info->idx.sd_idx, 
+ IRDMA_SD_TYPE_DIRECT, + IRDMA_HMC_DIRECT_BP_SIZE); + if (ret_code) + return ret_code; + + chunk->type = PBLE_SD_CONTIGOUS; + } + + offset = idx->rel_pd_idx << HMC_PAGED_BP_SHIFT; + chunk->size = info->pages << HMC_PAGED_BP_SHIFT; + chunk->vaddr = (uintptr_t)(sd_entry->u.bp.addr.va + offset); + chunk->fpm_addr = pble_rsrc->next_fpm_addr; + dev_dbg(rfdev_to_dev(dev), + "PBLE: chunk_size[%lld] = 0x%llx vaddr=0x%llx fpm_addr = %llx\n", + chunk->size, chunk->size, chunk->vaddr, chunk->fpm_addr); + + return 0; +} + +/** + * fpm_to_idx - given fpm address, get pble index + * @pble_rsrc: pble resource management + * @addr: fpm address for index + */ +static u32 fpm_to_idx(struct irdma_hmc_pble_rsrc *pble_rsrc, u64 addr) +{ + u64 idx; + + idx = (addr - (pble_rsrc->fpm_base_addr)) >> 3; + + return (u32)idx; +} + +/** + * add_bp_pages - add backing pages for sd + * @pble_rsrc: pble resource management + * @info: page info for sd + */ +static enum irdma_status_code +add_bp_pages(struct irdma_hmc_pble_rsrc *pble_rsrc, + struct irdma_add_page_info *info) +{ + struct irdma_sc_dev *dev = pble_rsrc->dev; + u8 *addr; + struct irdma_dma_mem mem; + struct irdma_hmc_pd_entry *pd_entry; + struct irdma_hmc_sd_entry *sd_entry = info->sd_entry; + struct irdma_hmc_info *hmc_info = info->hmc_info; + struct irdma_chunk *chunk = info->chunk; + enum irdma_status_code status = 0; + u32 rel_pd_idx = info->idx.rel_pd_idx; + u32 pd_idx = info->idx.pd_idx; + u32 i; + + if (irdma_pble_get_paged_mem(chunk, info->pages)) + return IRDMA_ERR_NO_MEMORY; + + status = irdma_add_sd_table_entry(dev->hw, hmc_info, info->idx.sd_idx, + IRDMA_SD_TYPE_PAGED, + IRDMA_HMC_DIRECT_BP_SIZE); + + if (status) + goto error; + + addr = (u8 *)(uintptr_t)chunk->vaddr; + for (i = 0; i < info->pages; i++) { + mem.pa = (u64)chunk->dmainfo.dmaaddrs[i]; + mem.size = 4096; + mem.va = addr; + pd_entry = &sd_entry->u.pd_table.pd_entry[rel_pd_idx++]; + if (!pd_entry->valid) { + status = irdma_add_pd_table_entry(dev, hmc_info, + pd_idx++, &mem); + if (status) + goto error; + + addr += 4096; + } + } + + chunk->fpm_addr = pble_rsrc->next_fpm_addr; + return 0; + +error: + irdma_pble_free_paged_mem(chunk); + + return status; +} + +/** + * add_pble_prm - add a sd entry for pble resoure + * @pble_rsrc: pble resource management + */ +static enum irdma_status_code +add_pble_prm(struct irdma_hmc_pble_rsrc *pble_rsrc) +{ + struct irdma_sc_dev *dev = pble_rsrc->dev; + struct irdma_hmc_sd_entry *sd_entry; + struct irdma_hmc_info *hmc_info; + struct irdma_chunk *chunk; + struct irdma_add_page_info info; + struct sd_pd_idx *idx = &info.idx; + enum irdma_status_code ret_code = 0; + enum irdma_sd_entry_type sd_entry_type; + u64 sd_reg_val = 0; + struct irdma_virt_mem chunkmem; + u32 pages; + + if (pble_rsrc->unallocated_pble < PBLE_PER_PAGE) + return IRDMA_ERR_NO_MEMORY; + + if (pble_rsrc->next_fpm_addr & 0xfff) + return IRDMA_ERR_INVALID_PAGE_DESC_INDEX; + + chunkmem.size = sizeof(*chunk); + chunkmem.va = kzalloc(chunkmem.size, GFP_ATOMIC); + if (!chunkmem.va) + return IRDMA_ERR_NO_MEMORY; + + chunk = chunkmem.va; + chunk->chunkmem = chunkmem; + hmc_info = dev->hmc_info; + chunk->dev = dev; + chunk->fpm_addr = pble_rsrc->next_fpm_addr; + get_sd_pd_idx(pble_rsrc, idx); + sd_entry = &hmc_info->sd_table.sd_entry[idx->sd_idx]; + pages = (idx->rel_pd_idx) ? 
(IRDMA_HMC_PD_CNT_IN_SD - idx->rel_pd_idx) : + IRDMA_HMC_PD_CNT_IN_SD; + pages = min(pages, pble_rsrc->unallocated_pble >> PBLE_512_SHIFT); + info.chunk = chunk; + info.hmc_info = hmc_info; + info.pages = pages; + info.sd_entry = sd_entry; + if (!sd_entry->valid) + sd_entry_type = (!idx->rel_pd_idx && + (pages == IRDMA_HMC_PD_CNT_IN_SD) && + dev->privileged) ? + IRDMA_SD_TYPE_DIRECT : IRDMA_SD_TYPE_PAGED; + else + sd_entry_type = sd_entry->entry_type; + + dev_dbg(rfdev_to_dev(dev), + "PBLE: pages = %d, unallocated_pble[%d] current_fpm_addr = %llx\n", + pages, pble_rsrc->unallocated_pble, pble_rsrc->next_fpm_addr); + dev_dbg(rfdev_to_dev(dev), "PBLE: sd_entry_type = %d\n", + sd_entry_type); + if (sd_entry_type == IRDMA_SD_TYPE_DIRECT) + ret_code = add_sd_direct(pble_rsrc, &info); + + if (ret_code) + sd_entry_type = IRDMA_SD_TYPE_PAGED; + else + pble_rsrc->stats_direct_sds++; + + if (sd_entry_type == IRDMA_SD_TYPE_PAGED) { + ret_code = add_bp_pages(pble_rsrc, &info); + if (ret_code) + goto error; + else + pble_rsrc->stats_paged_sds++; + } + + ret_code = irdma_prm_add_pble_mem(&pble_rsrc->pinfo, chunk); + if (ret_code) + goto error; + + pble_rsrc->next_fpm_addr += chunk->size; + dev_dbg(rfdev_to_dev(dev), + "PBLE: next_fpm_addr = %llx chunk_size[%llu] = 0x%llx\n", + pble_rsrc->next_fpm_addr, chunk->size, chunk->size); + pble_rsrc->unallocated_pble -= (u32)(chunk->size >> 3); + list_add(&chunk->list, &pble_rsrc->pinfo.clist); + sd_reg_val = (sd_entry_type == IRDMA_SD_TYPE_PAGED) ? + sd_entry->u.pd_table.pd_page_addr.pa : + sd_entry->u.bp.addr.pa; + if (sd_entry->valid) + return 0; + + if (dev->privileged) { + ret_code = irdma_hmc_sd_one(dev, hmc_info->hmc_fn_id, + sd_reg_val, idx->sd_idx, + sd_entry->entry_type, true); + if (ret_code) + goto error; + } + + sd_entry->valid = true; + return 0; + +error: + if (chunk->bitmapbuf) + kfree(chunk->bitmapmem.va); + + kfree(chunk->chunkmem.va); + + return ret_code; +} + +/** + * free_lvl2 - fee level 2 pble + * @pble_rsrc: pble resource management + * @palloc: level 2 pble allocation + */ +static void free_lvl2(struct irdma_hmc_pble_rsrc *pble_rsrc, + struct irdma_pble_alloc *palloc) +{ + u32 i; + struct irdma_pble_level2 *lvl2 = &palloc->level2; + struct irdma_pble_info *root = &lvl2->root; + struct irdma_pble_info *leaf = lvl2->leaf; + + for (i = 0; i < lvl2->leaf_cnt; i++, leaf++) { + if (leaf->addr) + irdma_prm_return_pbles(&pble_rsrc->pinfo, + &leaf->chunkinfo); + else + break; + } + + if (root->addr) + irdma_prm_return_pbles(&pble_rsrc->pinfo, &root->chunkinfo); + + kfree(lvl2->leafmem.va); + lvl2->leaf = NULL; +} + +/** + * get_lvl2_pble - get level 2 pble resource + * @pble_rsrc: pble resource management + * @palloc: level 2 pble allocation + */ +static enum irdma_status_code +get_lvl2_pble(struct irdma_hmc_pble_rsrc *pble_rsrc, + struct irdma_pble_alloc *palloc) +{ + u32 lf4k, lflast, total, i; + u32 pblcnt = PBLE_PER_PAGE; + u64 *addr; + struct irdma_pble_level2 *lvl2 = &palloc->level2; + struct irdma_pble_info *root = &lvl2->root; + struct irdma_pble_info *leaf; + enum irdma_status_code ret_code; + u64 fpm_addr; + + /* number of full 512 (4K) leafs) */ + lf4k = palloc->total_cnt >> 9; + lflast = palloc->total_cnt % PBLE_PER_PAGE; + total = (lflast == 0) ? 
lf4k : lf4k + 1; + lvl2->leaf_cnt = total; + + lvl2->leafmem.size = (sizeof(*leaf) * total); + lvl2->leafmem.va = kzalloc(lvl2->leafmem.size, GFP_ATOMIC); + if (!lvl2->leafmem.va) + return IRDMA_ERR_NO_MEMORY; + + lvl2->leaf = lvl2->leafmem.va; + leaf = lvl2->leaf; + ret_code = irdma_prm_get_pbles(&pble_rsrc->pinfo, &root->chunkinfo, + total << 3, &root->addr, &fpm_addr); + if (ret_code) { + kfree(lvl2->leafmem.va); + lvl2->leaf = NULL; + return IRDMA_ERR_NO_MEMORY; + } + + root->idx = fpm_to_idx(pble_rsrc, fpm_addr); + root->cnt = total; + addr = (u64 *)(uintptr_t)root->addr; + for (i = 0; i < total; i++, leaf++) { + pblcnt = (lflast && ((i + 1) == total)) ? + lflast : PBLE_PER_PAGE; + ret_code = irdma_prm_get_pbles(&pble_rsrc->pinfo, + &leaf->chunkinfo, pblcnt << 3, + &leaf->addr, &fpm_addr); + if (ret_code) + goto error; + + leaf->idx = fpm_to_idx(pble_rsrc, fpm_addr); + + leaf->cnt = pblcnt; + *addr = (u64)leaf->idx; + addr++; + } + + palloc->level = PBLE_LEVEL_2; + pble_rsrc->stats_lvl2++; + return 0; + +error: + free_lvl2(pble_rsrc, palloc); + + return IRDMA_ERR_NO_MEMORY; +} + +/** + * get_lvl1_pble - get level 1 pble resource + * @pble_rsrc: pble resource management + * @palloc: level 1 pble allocation + */ +static enum irdma_status_code +get_lvl1_pble(struct irdma_hmc_pble_rsrc *pble_rsrc, + struct irdma_pble_alloc *palloc) +{ + enum irdma_status_code ret_code; + u64 fpm_addr, vaddr; + struct irdma_pble_info *lvl1 = &palloc->level1; + + ret_code = irdma_prm_get_pbles(&pble_rsrc->pinfo, &lvl1->chunkinfo, + palloc->total_cnt << 3, &vaddr, + &fpm_addr); + if (ret_code) + return IRDMA_ERR_NO_MEMORY; + + lvl1->addr = vaddr; + palloc->level = PBLE_LEVEL_1; + lvl1->idx = fpm_to_idx(pble_rsrc, fpm_addr); + lvl1->cnt = palloc->total_cnt; + pble_rsrc->stats_lvl1++; + + return 0; +} + +/** + * get_lvl1_lvl2_pble - calls get_lvl1 and get_lvl2 pble routine + * @pble_rsrc: pble resources + * @palloc: contains all inforamtion regarding pble (idx + pble addr) + * @level1_only: flag for a level 1 PBLE + */ +static enum irdma_status_code +get_lvl1_lvl2_pble(struct irdma_hmc_pble_rsrc *pble_rsrc, + struct irdma_pble_alloc *palloc, bool level1_only) +{ + enum irdma_status_code status = 0; + + status = get_lvl1_pble(pble_rsrc, palloc); + if (!status || level1_only || palloc->total_cnt <= PBLE_PER_PAGE) + return status; + + status = get_lvl2_pble(pble_rsrc, palloc); + + return status; +} + +/** + * irdma_get_pble - allocate pbles from the prm + * @pble_rsrc: pble resources + * @palloc: contains all inforamtion regarding pble (idx + pble addr) + * @pble_cnt: #of pbles requested + * @level1_only: true if only pble level 1 to acquire + */ +enum irdma_status_code irdma_get_pble(struct irdma_hmc_pble_rsrc *pble_rsrc, + struct irdma_pble_alloc *palloc, + u32 pble_cnt, bool level1_only) +{ + enum irdma_status_code status = 0; + unsigned long flags; + int max_sds = 0; + int i; + + palloc->total_cnt = pble_cnt; + palloc->level = PBLE_LEVEL_0; + spin_lock_irqsave(&pble_rsrc->pble_lock, flags); + /*check first to see if we can get pble's without acquiring + * additional sd's + */ + status = get_lvl1_lvl2_pble(pble_rsrc, palloc, level1_only); + if (!status) + goto exit; + + max_sds = (palloc->total_cnt >> 18) + 1; + for (i = 0; i < max_sds; i++) { + status = add_pble_prm(pble_rsrc); + if (status) + break; + + status = get_lvl1_lvl2_pble(pble_rsrc, palloc, level1_only); + /* if level1_only, only go through it once */ + if (!status || level1_only) + break; + } + +exit: + if (!status) { + pble_rsrc->allocdpbles += 
pble_cnt; + pble_rsrc->stats_alloc_ok++; + } else { + pble_rsrc->stats_alloc_fail++; + } + spin_unlock_irqrestore(&pble_rsrc->pble_lock, flags); + + return status; +} + +/** + * irdma_free_pble - put pbles back into prm + * @pble_rsrc: pble resources + * @palloc: contains all information regarding pble resource being freed + */ +void irdma_free_pble(struct irdma_hmc_pble_rsrc *pble_rsrc, + struct irdma_pble_alloc *palloc) +{ + pble_rsrc->freedpbles += palloc->total_cnt; + + if (palloc->level == PBLE_LEVEL_2) + free_lvl2(pble_rsrc, palloc); + else + irdma_prm_return_pbles(&pble_rsrc->pinfo, + &palloc->level1.chunkinfo); + pble_rsrc->stats_alloc_freed++; +} diff --git a/drivers/infiniband/hw/irdma/pble.h b/drivers/infiniband/hw/irdma/pble.h new file mode 100644 index 000000000000..59ed2c28ad11 --- /dev/null +++ b/drivers/infiniband/hw/irdma/pble.h @@ -0,0 +1,135 @@ +/* SPDX-License-Identifier: GPL-2.0 or Linux-OpenIB */ +/* Copyright (c) 2015 - 2019 Intel Corporation */ +#ifndef IRDMA_PBLE_H +#define IRDMA_PBLE_H + +#define PBLE_SHIFT 6 +#define PBLE_PER_PAGE 512 +#define HMC_PAGED_BP_SHIFT 12 +#define PBLE_512_SHIFT 9 +#define PBLE_INVALID_IDX 0xffffffff + +enum irdma_pble_level { + PBLE_LEVEL_0 = 0, + PBLE_LEVEL_1 = 1, + PBLE_LEVEL_2 = 2, +}; + +enum irdma_alloc_type { + PBLE_NO_ALLOC = 0, + PBLE_SD_CONTIGOUS = 1, + PBLE_SD_PAGED = 2, +}; + +struct irdma_chunk; + +struct irdma_pble_chunkinfo { + struct irdma_chunk *pchunk; + u64 bit_idx; + u64 bits_used; +}; + +struct irdma_pble_info { + u64 addr; + u32 idx; + u32 cnt; + struct irdma_pble_chunkinfo chunkinfo; +}; + +struct irdma_pble_level2 { + struct irdma_pble_info root; + struct irdma_pble_info *leaf; + struct irdma_virt_mem leafmem; + u32 leaf_cnt; +}; + +struct irdma_pble_alloc { + u32 total_cnt; + enum irdma_pble_level level; + union { + struct irdma_pble_info level1; + struct irdma_pble_level2 level2; + }; +}; + +struct sd_pd_idx { + u32 sd_idx; + u32 pd_idx; + u32 rel_pd_idx; +}; + +struct irdma_add_page_info { + struct irdma_chunk *chunk; + struct irdma_hmc_sd_entry *sd_entry; + struct irdma_hmc_info *hmc_info; + struct sd_pd_idx idx; + u32 pages; +}; + +struct irdma_chunk { + struct list_head list; + struct irdma_dma_info dmainfo; + void *bitmapbuf; + + u32 sizeofbitmap; + u64 size; + u64 vaddr; + u64 fpm_addr; + u32 pg_cnt; + enum irdma_alloc_type type; + struct irdma_sc_dev *dev; + struct irdma_virt_mem bitmapmem; + struct irdma_virt_mem chunkmem; +}; + +struct irdma_pble_prm { + struct list_head clist; + spinlock_t prm_lock; /* protect prm bitmap */ + u64 total_pble_alloc; + u64 free_pble_cnt; + u8 pble_shift; +}; + +struct irdma_hmc_pble_rsrc { + u32 unallocated_pble; + spinlock_t pble_lock; /* to serialize PBLE resource acquisition */ + struct irdma_sc_dev *dev; + u64 fpm_base_addr; + u64 next_fpm_addr; + struct irdma_pble_prm pinfo; + u64 allocdpbles; + u64 freedpbles; + u32 stats_direct_sds; + u32 stats_paged_sds; + u64 stats_alloc_ok; + u64 stats_alloc_fail; + u64 stats_alloc_freed; + u64 stats_lvl1; + u64 stats_lvl2; +}; + +void irdma_destroy_pble_prm(struct irdma_hmc_pble_rsrc *pble_rsrc); +enum irdma_status_code +irdma_hmc_init_pble(struct irdma_sc_dev *dev, + struct irdma_hmc_pble_rsrc *pble_rsrc); +void irdma_free_pble(struct irdma_hmc_pble_rsrc *pble_rsrc, + struct irdma_pble_alloc *palloc); +enum irdma_status_code irdma_get_pble(struct irdma_hmc_pble_rsrc *pble_rsrc, + struct irdma_pble_alloc *palloc, + u32 pble_cnt, bool level1_only); +enum irdma_status_code irdma_prm_add_pble_mem(struct irdma_pble_prm *pprm, + 
struct irdma_chunk *pchunk); +enum irdma_status_code +irdma_prm_get_pbles(struct irdma_pble_prm *pprm, + struct irdma_pble_chunkinfo *chunkinfo, u32 mem_size, + u64 *vaddr, u64 *fpm_addr); +void irdma_prm_return_pbles(struct irdma_pble_prm *pprm, + struct irdma_pble_chunkinfo *chunkinfo); +void irdma_pble_acquire_lock(struct irdma_hmc_pble_rsrc *pble_rsrc, + unsigned long *flags); +void irdma_pble_release_lock(struct irdma_hmc_pble_rsrc *pble_rsrc, + unsigned long *flags); +void irdma_pble_free_paged_mem(struct irdma_chunk *chunk); +enum irdma_status_code irdma_pble_get_paged_mem(struct irdma_chunk *chunk, + int pg_cnt); +#endif /* IRDMA_PBLE_H */ From patchwork Fri Apr 17 17:12:44 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeff Kirsher X-Patchwork-Id: 221088 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.8 required=3.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1DE80C38A2D for ; Fri, 17 Apr 2020 17:13:47 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id F0B5320776 for ; Fri, 17 Apr 2020 17:13:46 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729646AbgDQRNq (ORCPT ); Fri, 17 Apr 2020 13:13:46 -0400 Received: from mga01.intel.com ([192.55.52.88]:30114 "EHLO mga01.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728088AbgDQRNo (ORCPT ); Fri, 17 Apr 2020 13:13:44 -0400 IronPort-SDR: NeP42BhQNdl5NHGTmvBk0KwEy0Iay7q7Iglkuyuq1gbm4R8Bhz8YNNXR+mVnaGhN7f4+KIXW1w 3SMMW0oPMiWQ== X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 17 Apr 2020 10:12:57 -0700 IronPort-SDR: 7IQ7RQUIgDMPai7Bge5GbBFIZuPevARZfwbxrk4V63OVxWiJL1avcHHiYgPRBpIr/rTCnDN3Sa yN1P/QJKMwDg== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.72,395,1580803200"; d="scan'208";a="364383732" Received: from jtkirshe-desk1.jf.intel.com ([134.134.177.86]) by fmsmga001.fm.intel.com with ESMTP; 17 Apr 2020 10:12:56 -0700 From: Jeff Kirsher To: gregkh@linuxfoundation.org, jgg@ziepe.ca Cc: Mustafa Ismail , netdev@vger.kernel.org, linux-rdma@vger.kernel.org, nhorman@redhat.com, sassmann@redhat.com, Shiraz Saleem Subject: [RFC PATCH v5 09/16] RDMA/irdma: Implement device supported verb APIs Date: Fri, 17 Apr 2020 10:12:44 -0700 Message-Id: <20200417171251.1533371-10-jeffrey.t.kirsher@intel.com> X-Mailer: git-send-email 2.25.2 In-Reply-To: <20200417171251.1533371-1-jeffrey.t.kirsher@intel.com> References: <20200417171251.1533371-1-jeffrey.t.kirsher@intel.com> MIME-Version: 1.0 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Mustafa Ismail Implement device supported verb APIs. The supported APIs vary based on the underlying transport the ibdev is registered as (i.e. iWARP or RoCEv2). 
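[ Reviewer note, not part of the diff below: the verbs in this patch repeatedly branch on how the ibdev was registered, using the core helpers rdma_protocol_roce()/rdma_protocol_iwarp() on port 1 (see irdma_query_port() and irdma_create_qp()). A condensed sketch of that pattern follows; irdma_transport_example() is a made-up name used only for illustration. ]

#include <rdma/ib_verbs.h>

/* Illustrative only: transport-dependent behaviour of the irdma verbs. */
static const char *irdma_transport_example(struct ib_device *ibdev)
{
        if (rdma_protocol_roce(ibdev, 1)) {
                /* RoCEv2 registration: IP-based GID table (ip_gids),
                 * RC/UD/GSI QP types accepted, non-zero default pkey.
                 */
                return "RoCEv2";
        }
        if (rdma_protocol_iwarp(ibdev, 1)) {
                /* iWARP registration: single GID table entry, only RC
                 * QPs accepted, first_sq_wq set at QP init.
                 */
                return "iWARP";
        }
        return "unknown";
}
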
Signed-off-by: Mustafa Ismail Signed-off-by: Shiraz Saleem --- drivers/infiniband/hw/irdma/verbs.c | 4555 +++++++++++++++++++++++ drivers/infiniband/hw/irdma/verbs.h | 213 ++ include/uapi/rdma/ib_user_ioctl_verbs.h | 1 + 3 files changed, 4769 insertions(+) create mode 100644 drivers/infiniband/hw/irdma/verbs.c create mode 100644 drivers/infiniband/hw/irdma/verbs.h diff --git a/drivers/infiniband/hw/irdma/verbs.c b/drivers/infiniband/hw/irdma/verbs.c new file mode 100644 index 000000000000..d8dad9545997 --- /dev/null +++ b/drivers/infiniband/hw/irdma/verbs.c @@ -0,0 +1,4555 @@ +// SPDX-License-Identifier: GPL-2.0 or Linux-OpenIB +/* Copyright (c) 2015 - 2019 Intel Corporation */ +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include "main.h" + +/** + * irdma_query_device - get device attributes + * @ibdev: device pointer from stack + * @props: returning device attributes + * @udata: user data + */ +static int irdma_query_device(struct ib_device *ibdev, + struct ib_device_attr *props, + struct ib_udata *udata) +{ + struct irdma_device *iwdev = to_iwdev(ibdev); + struct irdma_pci_f *rf = iwdev->rf; + struct pci_dev *pdev = iwdev->rf->pdev; + struct irdma_hw_attrs *hw_attrs = &rf->sc_dev.hw_attrs; + + if (udata->inlen || udata->outlen) + return -EINVAL; + + memset(props, 0, sizeof(*props)); + ether_addr_copy((u8 *)&props->sys_image_guid, iwdev->netdev->dev_addr); + props->fw_ver = (u64)FW_MAJOR_VER(&rf->sc_dev) << 32 | + FW_MINOR_VER(&rf->sc_dev); + props->device_cap_flags = iwdev->device_cap_flags; + props->vendor_id = pdev->vendor; + props->vendor_part_id = pdev->device; + props->hw_ver = (u32)rf->sc_dev.pci_rev; + props->max_mr_size = hw_attrs->max_mr_size; + props->max_qp = rf->max_qp - rf->used_qps; + props->max_qp_wr = hw_attrs->max_qp_wr; + props->max_send_sge = hw_attrs->uk_attrs.max_hw_wq_frags; + props->max_recv_sge = hw_attrs->uk_attrs.max_hw_wq_frags; + props->max_cq = rf->max_cq - rf->used_cqs; + props->max_cqe = rf->max_cqe; + props->max_mr = rf->max_mr - rf->used_mrs; + props->max_mw = props->max_mr; + props->max_pd = rf->max_pd - rf->used_pds; + props->max_sge_rd = hw_attrs->uk_attrs.max_hw_read_sges; + props->max_qp_rd_atom = hw_attrs->max_hw_ird; + props->max_qp_init_rd_atom = props->max_qp_rd_atom; + props->atomic_cap = IB_ATOMIC_NONE; + props->max_map_per_fmr = 1; + props->max_ah = rf->max_ah; + props->max_mcast_grp = rf->max_mcg; + props->max_mcast_qp_attach = IRDMA_MAX_MGS_PER_CTX; + props->max_total_mcast_qp_attach = rf->max_qp * IRDMA_MAX_MGS_PER_CTX; + props->max_fast_reg_page_list_len = IRDMA_MAX_PAGES_PER_FMR; + + return 0; +} + +/** + * irdma_get_eth_speed_and_width - Get IB port speed and width from netdev speed + * @link_speed: netdev phy link speed + * @active_speed: IB port speed + * @active_width: IB port width + */ +static void irdma_get_eth_speed_and_width(u32 link_speed, u8 *active_speed, + u8 *active_width) +{ + if (link_speed <= SPEED_1000) { + *active_width = IB_WIDTH_1X; + *active_speed = IB_SPEED_SDR; + } else if (link_speed <= SPEED_10000) { + *active_width = IB_WIDTH_1X; + *active_speed = IB_SPEED_FDR10; + } else if (link_speed <= SPEED_20000) { + *active_width = IB_WIDTH_4X; + *active_speed = IB_SPEED_DDR; + } else if (link_speed <= SPEED_25000) { + *active_width = IB_WIDTH_1X; + *active_speed = IB_SPEED_EDR; + } else if (link_speed <= SPEED_40000) { + *active_width = IB_WIDTH_4X; + *active_speed = IB_SPEED_FDR10; + } else { + *active_width = IB_WIDTH_4X; + *active_speed = 
IB_SPEED_EDR; + } +} + +/** + * irdma_query_port - get port attributes + * @ibdev: device pointer from stack + * @port: port number for query + * @props: returning device attributes + */ +static int irdma_query_port(struct ib_device *ibdev, u8 port, + struct ib_port_attr *props) +{ + struct irdma_device *iwdev = to_iwdev(ibdev); + struct net_device *netdev = iwdev->netdev; + + /* no need to zero out pros here. done by caller */ + props->max_mtu = IB_MTU_4096; + props->active_mtu = ib_mtu_int_to_enum(netdev->mtu); + props->lid = 1; + props->lmc = 0; + props->sm_lid = 0; + props->sm_sl = 0; + if (netif_carrier_ok(netdev) && netif_running(netdev)) { + props->state = IB_PORT_ACTIVE; + props->phys_state = IB_PORT_PHYS_STATE_LINK_UP; + } else { + props->state = IB_PORT_DOWN; + props->phys_state = IB_PORT_PHYS_STATE_DISABLED; + } + irdma_get_eth_speed_and_width(SPEED_100000, &props->active_speed, + &props->active_width); + + if (rdma_protocol_roce(ibdev, 1)) { + props->gid_tbl_len = 32; + props->ip_gids = true; + } else { + props->gid_tbl_len = 1; + } + props->pkey_tbl_len = IRDMA_PKEY_TBL_SZ; + props->qkey_viol_cntr = 0; + props->port_cap_flags |= IB_PORT_CM_SUP | IB_PORT_REINIT_SUP; + props->max_msg_sz = iwdev->rf->sc_dev.hw_attrs.max_hw_outbound_msg_size; + + return 0; +} + +/** + * irdma_disassociate_ucontext - Disassociate user context + * @context: ib user context + */ +static void irdma_disassociate_ucontext(struct ib_ucontext *context) +{ +} + +static int irdma_mmap_legacy(struct irdma_ucontext *ucontext, + struct vm_area_struct *vma) +{ + u64 dbaddr_pgoff, pfn; + + dbaddr_pgoff = (uintptr_t)ucontext->iwdev->rf->sc_dev.hw_regs[IRDMA_DB_ADDR_OFFSET] + >> PAGE_SHIFT; + vma->vm_private_data = ucontext; + pfn = dbaddr_pgoff + (pci_resource_start(ucontext->iwdev->rf->pdev, 0) + >> PAGE_SHIFT); + + return rdma_user_mmap_io(&ucontext->ibucontext, vma, pfn, PAGE_SIZE, + pgprot_noncached(vma->vm_page_prot), NULL); +} + +static void irdma_mmap_free(struct rdma_user_mmap_entry *rdma_entry) +{ + struct irdma_user_mmap_entry *entry = to_irdma_mmap_entry(rdma_entry); + + kfree(entry); +} + +static struct rdma_user_mmap_entry* +irdma_user_mmap_entry_insert(struct irdma_ucontext *ucontext, u64 bar_offset, + enum irdma_mmap_flag mmap_flag, u64 *mmap_offset) +{ + struct irdma_user_mmap_entry *entry = kzalloc(sizeof(*entry), GFP_KERNEL); + int ret; + + if (!entry) + return NULL; + + entry->bar_offset = bar_offset; + entry->mmap_flag = mmap_flag; + + ret = rdma_user_mmap_entry_insert(&ucontext->ibucontext, + &entry->rdma_entry, PAGE_SIZE); + if (ret) { + kfree(entry); + return NULL; + } + *mmap_offset = rdma_user_mmap_get_offset(&entry->rdma_entry); + + return &entry->rdma_entry; +} + +/** + * irdma_mmap - user memory map + * @context: context created during alloc + * @vma: kernel info for user memory map + */ +static int irdma_mmap(struct ib_ucontext *context, struct vm_area_struct *vma) +{ + struct rdma_user_mmap_entry *rdma_entry; + struct irdma_user_mmap_entry *entry; + struct irdma_ucontext *ucontext; + u64 pfn; + int ret; + + ucontext = to_ucontext(context); + + /* Legacy support for libi40iw with hard-coded mmap key */ + if (ucontext->abi_ver <= 5) + return irdma_mmap_legacy(ucontext, vma); + + rdma_entry = rdma_user_mmap_entry_get(&ucontext->ibucontext, vma); + if (!rdma_entry) { + ibdev_dbg(to_ibdev(ucontext->iwdev), + "VERBS: pgoff[0x%lx] does not have valid entry\n", + vma->vm_pgoff); + return -EINVAL; + } + + entry = to_irdma_mmap_entry(rdma_entry); + ibdev_dbg(to_ibdev(ucontext->iwdev), + 
"VERBS: bar_offset[0x%llx] mmap_flag [%d]\n", + entry->bar_offset, entry->mmap_flag); + + pfn = (entry->bar_offset + + pci_resource_start(ucontext->iwdev->rf->pdev, 0)) >> PAGE_SHIFT; + + switch (entry->mmap_flag) { + case IRDMA_MMAP_IO_NC: + ret = rdma_user_mmap_io(context, vma, pfn, PAGE_SIZE, + pgprot_noncached(vma->vm_page_prot), + rdma_entry); + break; + case IRDMA_MMAP_IO_WC: + ret = rdma_user_mmap_io(context, vma, pfn, PAGE_SIZE, + pgprot_writecombine(vma->vm_page_prot), + rdma_entry); + break; + default: + ret = -EINVAL; + } + + if (ret) + ibdev_dbg(to_ibdev(ucontext->iwdev), + "VERBS: bar_offset [0x%llx] mmap_flag[%d] err[%d]\n", + entry->bar_offset, entry->mmap_flag, ret); + + rdma_user_mmap_entry_put(rdma_entry); + + return ret; +} + +/** + * irdma_alloc_push_page - allocate a push page for qp + * @iwqp: qp pointer + */ +static void irdma_alloc_push_page(struct irdma_qp *iwqp) +{ + struct irdma_cqp_request *cqp_request; + struct cqp_cmds_info *cqp_info; + struct irdma_device *iwdev = iwqp->iwdev; + struct irdma_sc_qp *qp = &iwqp->sc_qp; + enum irdma_status_code status; + + if (qp->push_idx != IRDMA_INVALID_PUSH_PAGE_INDEX) + return; + + cqp_request = irdma_get_cqp_request(&iwdev->rf->cqp, true); + if (!cqp_request) + return; + + refcount_inc(&cqp_request->refcnt); + cqp_info = &cqp_request->info; + cqp_info->cqp_cmd = IRDMA_OP_MANAGE_PUSH_PAGE; + cqp_info->post_sq = 1; + cqp_info->in.u.manage_push_page.info.push_idx = 0; + cqp_info->in.u.manage_push_page.info.qs_handle = + qp->vsi->qos[qp->user_pri].qs_handle; + cqp_info->in.u.manage_push_page.info.free_page = 0; + cqp_info->in.u.manage_push_page.info.push_page_type = 0; + cqp_info->in.u.manage_push_page.cqp = &iwdev->rf->cqp.sc_cqp; + cqp_info->in.u.manage_push_page.scratch = (uintptr_t)cqp_request; + + status = irdma_handle_cqp_op(iwdev->rf, cqp_request); + if (!status) { + qp->push_idx = cqp_request->compl_info.op_ret_val; + qp->push_offset = 0; + } else { + ibdev_dbg(to_ibdev(iwdev), "VERBS: CQP-OP Push page fail"); + } + + irdma_put_cqp_request(&iwdev->rf->cqp, cqp_request); +} + +/** + * irdma_alloc_ucontext - Allocate the user context data structure + * @uctx: uverbs context pointer + * @udata: user data + * + * This keeps track of all objects associated with a particular + * user-mode client. 
+ */ +static int irdma_alloc_ucontext(struct ib_ucontext *uctx, + struct ib_udata *udata) +{ + struct ib_device *ibdev = uctx->device; + struct irdma_device *iwdev = to_iwdev(ibdev); + struct irdma_alloc_ucontext_req req; + struct irdma_alloc_ucontext_resp uresp = {}; + struct i40iw_alloc_ucontext_resp uresp_gen1 = {}; + struct irdma_ucontext *ucontext = to_ucontext(uctx); + struct irdma_uk_attrs *uk_attrs; + + if (ib_copy_from_udata(&req, udata, min(sizeof(req), udata->inlen))) + return -EINVAL; + + if (req.userspace_ver > IRDMA_ABI_VER) + goto ver_error; + + ucontext->iwdev = iwdev; + ucontext->abi_ver = req.userspace_ver; + + uk_attrs = &iwdev->rf->sc_dev.hw_attrs.uk_attrs; + /* GEN_1 legacy support with libi40iw */ + if (req.userspace_ver <= 5) { + if (uk_attrs->hw_rev != IRDMA_GEN_1) + goto ver_error; + + uresp_gen1.max_qps = iwdev->rf->max_qp; + uresp_gen1.max_pds = iwdev->rf->sc_dev.hw_attrs.max_hw_pds; + uresp_gen1.wq_size = iwdev->rf->sc_dev.hw_attrs.max_qp_wr * 2; + uresp_gen1.kernel_ver = req.userspace_ver; + if (ib_copy_to_udata(udata, &uresp_gen1, + min(sizeof(uresp_gen1), udata->outlen))) + return -EFAULT; + } else { + u64 bar_off = + (uintptr_t)iwdev->rf->sc_dev.hw_regs[IRDMA_DB_ADDR_OFFSET]; + ucontext->db_mmap_entry = + irdma_user_mmap_entry_insert(ucontext, bar_off, + IRDMA_MMAP_IO_NC, + &uresp.db_mmap_key); + + if (!ucontext->db_mmap_entry) + return -ENOMEM; + + uresp.kernel_ver = req.userspace_ver; + uresp.feature_flags = uk_attrs->feature_flags; + uresp.max_hw_wq_frags = uk_attrs->max_hw_wq_frags; + uresp.max_hw_read_sges = uk_attrs->max_hw_read_sges; + uresp.max_hw_inline = uk_attrs->max_hw_inline; + uresp.max_hw_rq_quanta = uk_attrs->max_hw_rq_quanta; + uresp.max_hw_wq_quanta = uk_attrs->max_hw_wq_quanta; + uresp.max_hw_sq_chunk = uk_attrs->max_hw_sq_chunk; + uresp.max_hw_cq_size = uk_attrs->max_hw_cq_size; + uresp.min_hw_cq_size = uk_attrs->min_hw_cq_size; + uresp.hw_rev = uk_attrs->hw_rev; + if (ib_copy_to_udata(udata, &uresp, + min(sizeof(uresp), udata->outlen))) + return -EFAULT; + } + + INIT_LIST_HEAD(&ucontext->cq_reg_mem_list); + spin_lock_init(&ucontext->cq_reg_mem_list_lock); + INIT_LIST_HEAD(&ucontext->qp_reg_mem_list); + spin_lock_init(&ucontext->qp_reg_mem_list_lock); + + return 0; + +ver_error: + dev_err(rfdev_to_dev(&iwdev->rf->sc_dev), + "Invalid userspace driver version detected. 
Detected version %d, should be %d\n", + req.userspace_ver, IRDMA_ABI_VER); + uresp.kernel_ver = IRDMA_ABI_VER; + return -EINVAL; +} + +/** + * irdma_dealloc_ucontext - deallocate the user context data structure + * @context: user context created during alloc + */ +static void irdma_dealloc_ucontext(struct ib_ucontext *context) +{ + struct irdma_ucontext *ucontext = to_ucontext(context); + + if (ucontext->db_mmap_entry) + rdma_user_mmap_entry_remove(ucontext->db_mmap_entry); +} + +/** + * irdma_alloc_pd - allocate protection domain + * @pd: PD pointer + * @udata: user data + */ +static int irdma_alloc_pd(struct ib_pd *pd, struct ib_udata *udata) +{ + struct irdma_pd *iwpd = to_iwpd(pd); + struct irdma_device *iwdev = to_iwdev(pd->device); + struct irdma_sc_dev *dev = &iwdev->rf->sc_dev; + struct irdma_pci_f *rf = iwdev->rf; + struct irdma_alloc_pd_resp uresp = {}; + struct irdma_sc_pd *sc_pd; + u32 pd_id = 0; + int err; + + err = irdma_alloc_rsrc(rf, rf->allocated_pds, rf->max_pd, &pd_id, + &rf->next_pd); + if (err) + return err; + + sc_pd = &iwpd->sc_pd; + if (udata) { + struct irdma_ucontext *ucontext = + rdma_udata_to_drv_context(udata, struct irdma_ucontext, + ibucontext); + dev->iw_pd_ops->pd_init(dev, sc_pd, pd_id, ucontext->abi_ver); + uresp.pd_id = pd_id; + if (ib_copy_to_udata(udata, &uresp, + min(sizeof(uresp), udata->outlen))) { + err = -EFAULT; + goto error; + } + } else { + dev->iw_pd_ops->pd_init(dev, sc_pd, pd_id, IRDMA_ABI_VER); + } + + return 0; +error: + irdma_free_rsrc(rf, rf->allocated_pds, pd_id); + + return err; +} + +/** + * irdma_dealloc_pd - deallocate pd + * @ibpd: ptr of pd to be deallocated + * @udata: user data + */ +static void irdma_dealloc_pd(struct ib_pd *ibpd, struct ib_udata *udata) +{ + struct irdma_pd *iwpd = to_iwpd(ibpd); + struct irdma_device *iwdev = to_iwdev(ibpd->device); + + irdma_free_rsrc(iwdev->rf, iwdev->rf->allocated_pds, iwpd->sc_pd.pd_id); +} + +/** + * irdma_get_pbl - Retrieve pbl from a list given a virtual + * address + * @va: user virtual address + * @pbl_list: pbl list to search in (QP's or CQ's) + */ +static struct irdma_pbl *irdma_get_pbl(unsigned long va, + struct list_head *pbl_list) +{ + struct irdma_pbl *iwpbl; + + list_for_each_entry (iwpbl, pbl_list, list) { + if (iwpbl->user_base == va) { + list_del(&iwpbl->list); + iwpbl->on_list = false; + return iwpbl; + } + } + + return NULL; +} + +/** + * irdma_clean_cqes - clean cq entries for qp + * @iwqp: qp ptr (user or kernel) + * @iwcq: cq ptr + */ +static void irdma_clean_cqes(struct irdma_qp *iwqp, struct irdma_cq *iwcq) +{ + struct irdma_cq_uk *ukcq = &iwcq->sc_cq.cq_uk; + unsigned long flags; + + spin_lock_irqsave(&iwcq->lock, flags); + ukcq->ops.iw_cq_clean(&iwqp->sc_qp.qp_uk, ukcq); + spin_unlock_irqrestore(&iwcq->lock, flags); +} + +static void irdma_remove_push_mmap_entries(struct irdma_qp *iwqp) +{ + if (iwqp->push_db_mmap_entry) + rdma_user_mmap_entry_remove(iwqp->push_wqe_mmap_entry); + if (iwqp->push_wqe_mmap_entry) + rdma_user_mmap_entry_remove(iwqp->push_db_mmap_entry); +} + +static int irdma_setup_push_mmap_entries(struct irdma_ucontext *ucontext, + struct irdma_qp *iwqp, + u64 *push_wqe_mmap_key, + u64 *push_db_mmap_key) +{ + struct irdma_device *iwdev = ucontext->iwdev; + u64 rsvd, bar_off; + + rsvd = (iwdev->rf->ldev.ftype ? 
IRDMA_VF_BAR_RSVD : IRDMA_PF_BAR_RSVD); + bar_off = (uintptr_t)iwdev->rf->sc_dev.hw_regs[IRDMA_DB_ADDR_OFFSET]; + /* skip over db page */ + bar_off += IRDMA_HW_PAGE_SIZE; + /* push wqe page */ + bar_off += rsvd + iwqp->sc_qp.push_idx * IRDMA_HW_PAGE_SIZE; + iwqp->push_wqe_mmap_entry = irdma_user_mmap_entry_insert(ucontext, + bar_off, IRDMA_MMAP_IO_WC, + push_wqe_mmap_key); + if (!iwqp->push_wqe_mmap_entry) + return -ENOMEM; + + /* push doorbell page */ + bar_off += IRDMA_HW_PAGE_SIZE; + iwqp->push_db_mmap_entry = irdma_user_mmap_entry_insert(ucontext, + bar_off, IRDMA_MMAP_IO_NC, + push_db_mmap_key); + + if (!iwqp->push_db_mmap_entry) { + rdma_user_mmap_entry_remove(iwqp->push_wqe_mmap_entry); + return -ENOMEM; + } + + return 0; +} + +/** + * irdma_destroy_qp - destroy qp + * @ibqp: qp's ib pointer also to get to device's qp address + * @udata: user data + */ +static int irdma_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata) +{ + struct irdma_qp *iwqp = to_iwqp(ibqp); + + iwqp->destroyed = 1; + if (iwqp->ibqp_state >= IB_QPS_INIT && iwqp->ibqp_state < IB_QPS_RTS) + irdma_next_iw_state(iwqp, IRDMA_QP_STATE_ERROR, 0, 0, 0); + + if (!iwqp->user_mode) { + if (iwqp->iwscq) { + irdma_clean_cqes(iwqp, iwqp->iwscq); + if (iwqp->iwrcq != iwqp->iwscq) + irdma_clean_cqes(iwqp, iwqp->iwrcq); + } + } + + irdma_remove_push_mmap_entries(iwqp); + irdma_free_lsmm_rsrc(iwqp); + irdma_rem_ref(&iwqp->ibqp); + + return 0; +} + +/** + * irdma_setup_virt_qp - setup for allocation of virtual qp + * @iwdev: irdma device + * @iwqp: qp ptr + * @init_info: initialize info to return + */ +static int irdma_setup_virt_qp(struct irdma_device *iwdev, + struct irdma_qp *iwqp, + struct irdma_qp_init_info *init_info) +{ + struct irdma_pbl *iwpbl = iwqp->iwpbl; + struct irdma_qp_mr *qpmr = &iwpbl->qp_mr; + + iwqp->page = qpmr->sq_page; + init_info->shadow_area_pa = qpmr->shadow; + if (iwpbl->pbl_allocated) { + init_info->virtual_map = true; + init_info->sq_pa = qpmr->sq_pbl.idx; + init_info->rq_pa = qpmr->rq_pbl.idx; + } else { + init_info->sq_pa = qpmr->sq_pbl.addr; + init_info->rq_pa = qpmr->rq_pbl.addr; + } + + return 0; +} + +/** + * irdma_setup_kmode_qp - setup initialization for kernel mode qp + * @iwdev: iwarp device + * @iwqp: qp ptr (user or kernel) + * @info: initialize info to return + * @init_attr: Initial QP create attributes + */ +static int irdma_setup_kmode_qp(struct irdma_device *iwdev, + struct irdma_qp *iwqp, + struct irdma_qp_init_info *info, + struct ib_qp_init_attr *init_attr) +{ + struct irdma_dma_mem *mem = &iwqp->kqp.dma_mem; + u32 sqdepth, rqdepth; + u8 sqshift, rqshift; + u32 size; + enum irdma_status_code status; + struct irdma_qp_uk_init_info *ukinfo = &info->qp_uk_init_info; + struct irdma_uk_attrs *uk_attrs = &iwdev->rf->sc_dev.hw_attrs.uk_attrs; + + irdma_get_wqe_shift(uk_attrs, + uk_attrs->hw_rev > IRDMA_GEN_1 ? 
ukinfo->max_sq_frag_cnt + 1 : + ukinfo->max_sq_frag_cnt, + ukinfo->max_inline_data, &sqshift); + status = irdma_get_sqdepth(uk_attrs, ukinfo->sq_size, sqshift, + &sqdepth); + if (status) + return -ENOMEM; + + if (uk_attrs->hw_rev == IRDMA_GEN_1) + rqshift = IRDMA_MAX_RQ_WQE_SHIFT_GEN1; + else + irdma_get_wqe_shift(uk_attrs, ukinfo->max_rq_frag_cnt, 0, + &rqshift); + + status = irdma_get_rqdepth(uk_attrs, ukinfo->rq_size, rqshift, + &rqdepth); + if (status) + return -ENOMEM; + + iwqp->kqp.sq_wrid_mem = + kcalloc(sqdepth, sizeof(*iwqp->kqp.sq_wrid_mem), GFP_KERNEL); + if (!iwqp->kqp.sq_wrid_mem) + return -ENOMEM; + + iwqp->kqp.rq_wrid_mem = + kcalloc(rqdepth, sizeof(*iwqp->kqp.rq_wrid_mem), GFP_KERNEL); + if (!iwqp->kqp.rq_wrid_mem) { + kfree(iwqp->kqp.sq_wrid_mem); + iwqp->kqp.sq_wrid_mem = NULL; + return -ENOMEM; + } + + ukinfo->sq_wrtrk_array = iwqp->kqp.sq_wrid_mem; + ukinfo->rq_wrid_array = iwqp->kqp.rq_wrid_mem; + + size = (sqdepth + rqdepth) * IRDMA_QP_WQE_MIN_SIZE; + size += (IRDMA_SHADOW_AREA_SIZE << 3); + + mem->size = ALIGN(size, 256); + mem->va = dma_alloc_coherent(hw_to_dev(iwdev->rf->sc_dev.hw), + mem->size, &mem->pa, GFP_KERNEL); + if (!mem->va) { + kfree(iwqp->kqp.sq_wrid_mem); + iwqp->kqp.sq_wrid_mem = NULL; + kfree(iwqp->kqp.rq_wrid_mem); + iwqp->kqp.rq_wrid_mem = NULL; + return -ENOMEM; + } + + ukinfo->sq = mem->va; + info->sq_pa = mem->pa; + ukinfo->rq = &ukinfo->sq[sqdepth]; + info->rq_pa = info->sq_pa + (sqdepth * IRDMA_QP_WQE_MIN_SIZE); + ukinfo->shadow_area = ukinfo->rq[rqdepth].elem; + info->shadow_area_pa = info->rq_pa + (rqdepth * IRDMA_QP_WQE_MIN_SIZE); + ukinfo->sq_size = sqdepth >> sqshift; + ukinfo->rq_size = rqdepth >> rqshift; + ukinfo->qp_id = iwqp->ibqp.qp_num; + + init_attr->cap.max_send_wr = (sqdepth - IRDMA_SQ_RSVD) >> sqshift; + init_attr->cap.max_recv_wr = (rqdepth - IRDMA_RQ_RSVD) >> rqshift; + + return 0; +} + +/** + * irdma_roce_mtu - set MTU to supported path MTU values + * @mtu: MTU + */ +static u32 irdma_roce_mtu(u32 mtu) +{ + if (mtu > 4096) + return 4096; + else if (mtu > 2048) + return 2048; + else if (mtu > 1024) + return 1024; + else if (mtu > 512) + return 512; + else + return 256; +} + +/** + * irdma_create_qp - create qp + * @ibpd: ptr of pd + * @init_attr: attributes for qp + * @udata: user data for create qp + */ +static struct ib_qp *irdma_create_qp(struct ib_pd *ibpd, + struct ib_qp_init_attr *init_attr, + struct ib_udata *udata) +{ + struct irdma_pd *iwpd = to_iwpd(ibpd); + struct irdma_device *iwdev = to_iwdev(ibpd->device); + struct irdma_pci_f *rf = iwdev->rf; + struct irdma_cqp *iwcqp = &rf->cqp; + struct irdma_qp *iwqp; + struct irdma_create_qp_req req; + struct irdma_create_qp_resp uresp = {}; + struct i40iw_create_qp_resp uresp_gen1 = {}; + u32 qp_num = 0; + void *mem; + enum irdma_status_code ret; + int err_code = 0; + int sq_size; + int rq_size; + struct irdma_sc_qp *qp; + struct irdma_sc_dev *dev = &rf->sc_dev; + struct irdma_uk_attrs *uk_attrs = &dev->hw_attrs.uk_attrs; + struct irdma_qp_init_info init_info = {}; + struct irdma_create_qp_info *qp_info; + struct irdma_cqp_request *cqp_request; + struct cqp_cmds_info *cqp_info; + struct irdma_qp_host_ctx_info *ctx_info; + struct irdma_iwarp_offload_info *iwarp_info; + struct irdma_roce_offload_info *roce_info; + struct irdma_udp_offload_info *udp_info; + unsigned long flags; + + if (init_attr->create_flags || + init_attr->cap.max_inline_data > uk_attrs->max_hw_inline || + init_attr->cap.max_send_sge > uk_attrs->max_hw_wq_frags || + init_attr->cap.max_recv_sge > 
uk_attrs->max_hw_wq_frags) + return ERR_PTR(-EINVAL); + + sq_size = init_attr->cap.max_send_wr; + rq_size = init_attr->cap.max_recv_wr; + + init_info.vsi = &iwdev->vsi; + init_info.qp_uk_init_info.uk_attrs = uk_attrs; + init_info.qp_uk_init_info.sq_size = sq_size; + init_info.qp_uk_init_info.rq_size = rq_size; + init_info.qp_uk_init_info.max_sq_frag_cnt = init_attr->cap.max_send_sge; + init_info.qp_uk_init_info.max_rq_frag_cnt = init_attr->cap.max_recv_sge; + init_info.qp_uk_init_info.max_inline_data = init_attr->cap.max_inline_data; + + mem = kzalloc(sizeof(*iwqp), GFP_KERNEL); + if (!mem) + return ERR_PTR(-ENOMEM); + + iwqp = mem; + qp = &iwqp->sc_qp; + qp->qp_uk.back_qp = (void *)iwqp; + qp->qp_uk.lock = &iwqp->lock; + qp->push_idx = IRDMA_INVALID_PUSH_PAGE_INDEX; + + iwqp->q2_ctx_mem.size = ALIGN(IRDMA_Q2_BUF_SIZE + IRDMA_QP_CTX_SIZE, + 256); + iwqp->q2_ctx_mem.va = dma_alloc_coherent(hw_to_dev(dev->hw), + iwqp->q2_ctx_mem.size, + &iwqp->q2_ctx_mem.pa, + GFP_KERNEL); + if (!iwqp->q2_ctx_mem.va) { + err_code = -ENOMEM; + goto error; + } + + init_info.q2 = iwqp->q2_ctx_mem.va; + init_info.q2_pa = iwqp->q2_ctx_mem.pa; + init_info.host_ctx = (void *)init_info.q2 + IRDMA_Q2_BUF_SIZE; + init_info.host_ctx_pa = init_info.q2_pa + IRDMA_Q2_BUF_SIZE; + + if (init_attr->qp_type == IB_QPT_GSI && !rf->ldev.ftype) + qp_num = 1; + else + err_code = irdma_alloc_rsrc(rf, rf->allocated_qps, rf->max_qp, + &qp_num, &rf->next_qp); + if (err_code) + goto error; + + iwqp->iwdev = iwdev; + iwqp->iwpd = iwpd; + if (init_attr->qp_type == IB_QPT_GSI && !rf->ldev.ftype) + iwqp->ibqp.qp_num = 1; + else + iwqp->ibqp.qp_num = qp_num; + + qp = &iwqp->sc_qp; + iwqp->iwscq = to_iwcq(init_attr->send_cq); + iwqp->iwrcq = to_iwcq(init_attr->recv_cq); + iwqp->host_ctx.va = init_info.host_ctx; + iwqp->host_ctx.pa = init_info.host_ctx_pa; + iwqp->host_ctx.size = IRDMA_QP_CTX_SIZE; + + init_info.pd = &iwpd->sc_pd; + init_info.qp_uk_init_info.qp_id = iwqp->ibqp.qp_num; + if (!rdma_protocol_roce(&iwdev->ibdev, 1)) + init_info.qp_uk_init_info.first_sq_wq = 1; + iwqp->ctx_info.qp_compl_ctx = (uintptr_t)qp; + init_waitqueue_head(&iwqp->waitq); + init_waitqueue_head(&iwqp->mod_qp_waitq); + + if (rdma_protocol_roce(&iwdev->ibdev, 1)) { + if (init_attr->qp_type != IB_QPT_RC && + init_attr->qp_type != IB_QPT_UD && + init_attr->qp_type != IB_QPT_GSI) { + err_code = -EINVAL; + goto error; + } + } else { + if (init_attr->qp_type != IB_QPT_RC) { + err_code = -EINVAL; + goto error; + } + } + if (udata) { + err_code = ib_copy_from_udata(&req, udata, + min(sizeof(req), udata->inlen)); + if (err_code) { + ibdev_dbg(to_ibdev(iwdev), + "VERBS: ib_copy_from_data fail\n"); + goto error; + } + + iwqp->ctx_info.qp_compl_ctx = req.user_compl_ctx; + iwqp->user_mode = 1; + if (req.user_wqe_bufs) { + struct irdma_ucontext *ucontext = + rdma_udata_to_drv_context(udata, + struct irdma_ucontext, + ibucontext); + spin_lock_irqsave(&ucontext->qp_reg_mem_list_lock, flags); + iwqp->iwpbl = irdma_get_pbl((unsigned long)req.user_wqe_bufs, + &ucontext->qp_reg_mem_list); + spin_unlock_irqrestore(&ucontext->qp_reg_mem_list_lock, flags); + + if (!iwqp->iwpbl) { + err_code = -ENODATA; + ibdev_dbg(to_ibdev(iwdev), + "VERBS: no pbl info\n"); + goto error; + } + } + init_info.qp_uk_init_info.abi_ver = iwpd->sc_pd.abi_ver; + err_code = irdma_setup_virt_qp(iwdev, iwqp, &init_info); + } else { + init_info.qp_uk_init_info.abi_ver = IRDMA_ABI_VER; + err_code = irdma_setup_kmode_qp(iwdev, iwqp, &init_info, init_attr); + } + + if (err_code) { + ibdev_dbg(to_ibdev(iwdev), 
"VERBS: setup qp failed\n"); + goto error; + } + + if (rdma_protocol_roce(&iwdev->ibdev, 1)) { + if (init_attr->qp_type == IB_QPT_RC) { + init_info.type = IRDMA_QP_TYPE_ROCE_RC; + init_info.qp_uk_init_info.qp_caps = IRDMA_SEND_WITH_IMM | + IRDMA_WRITE_WITH_IMM | + IRDMA_ROCE; + } else { + init_info.type = IRDMA_QP_TYPE_ROCE_UD; + init_info.qp_uk_init_info.qp_caps = IRDMA_SEND_WITH_IMM | + IRDMA_ROCE; + } + } else { + init_info.type = IRDMA_QP_TYPE_IWARP; + init_info.qp_uk_init_info.qp_caps = IRDMA_WRITE_WITH_IMM; + } + + ret = dev->iw_priv_qp_ops->qp_init(qp, &init_info); + if (ret) { + err_code = -EPROTO; + ibdev_dbg(to_ibdev(iwdev), "VERBS: qp_init fail\n"); + goto error; + } + + ctx_info = &iwqp->ctx_info; + if (rdma_protocol_roce(&iwdev->ibdev, 1)) { + iwqp->ctx_info.roce_info = &iwqp->roce_info; + iwqp->ctx_info.udp_info = &iwqp->udp_info; + udp_info = &iwqp->udp_info; + udp_info->snd_mss = irdma_roce_mtu(iwdev->vsi.mtu); + udp_info->cwnd = 0x400; + udp_info->src_port = 0xc000; + udp_info->dst_port = ROCE_V2_UDP_DPORT; + roce_info = &iwqp->roce_info; + ether_addr_copy(roce_info->mac_addr, iwdev->netdev->dev_addr); + + if (init_attr->qp_type == IB_QPT_GSI && !rf->sc_dev.privileged) + roce_info->is_qp1 = true; + roce_info->rd_en = true; + roce_info->wr_rdresp_en = true; + roce_info->dcqcn_en = true; + + roce_info->ack_credits = 0x1E; + roce_info->ird_size = IRDMA_MAX_ENCODED_IRD_SIZE; + roce_info->ord_size = dev->hw_attrs.max_hw_ord; + + if (!iwqp->user_mode) { + roce_info->priv_mode_en = true; + roce_info->fast_reg_en = true; + roce_info->udprivcq_en = true; + } + roce_info->roce_tver = 0; + } else { + iwqp->ctx_info.iwarp_info = &iwqp->iwarp_info; + iwarp_info = &iwqp->iwarp_info; + ether_addr_copy(iwarp_info->mac_addr, iwdev->netdev->dev_addr); + iwarp_info->rd_en = true; + iwarp_info->wr_rdresp_en = true; + iwarp_info->ecn_en = true; + + if (dev->hw_attrs.uk_attrs.hw_rev > IRDMA_GEN_1) + iwarp_info->ib_rd_en = true; + if (!iwqp->user_mode) { + iwarp_info->priv_mode_en = true; + iwarp_info->fast_reg_en = true; + } + iwarp_info->ddp_ver = 1; + iwarp_info->rdmap_ver = 1; + ctx_info->iwarp_info_valid = true; + } + ctx_info->send_cq_num = iwqp->iwscq->sc_cq.cq_uk.cq_id; + ctx_info->rcv_cq_num = iwqp->iwrcq->sc_cq.cq_uk.cq_id; + if (rdma_protocol_roce(&iwdev->ibdev, 1)) { + ret = dev->iw_priv_qp_ops->qp_setctx_roce(&iwqp->sc_qp, + iwqp->host_ctx.va, + ctx_info); + } else { + ret = dev->iw_priv_qp_ops->qp_setctx(&iwqp->sc_qp, + iwqp->host_ctx.va, + ctx_info); + ctx_info->iwarp_info_valid = false; + } + + cqp_request = irdma_get_cqp_request(iwcqp, true); + if (!cqp_request) { + err_code = -ENOMEM; + goto error; + } + + cqp_info = &cqp_request->info; + qp_info = &cqp_request->info.in.u.qp_create.info; + memset(qp_info, 0, sizeof(*qp_info)); + qp_info->mac_valid = true; + qp_info->cq_num_valid = true; + qp_info->next_iwarp_state = IRDMA_QP_STATE_IDLE; + + cqp_info->cqp_cmd = IRDMA_OP_QP_CREATE; + cqp_info->post_sq = 1; + cqp_info->in.u.qp_create.qp = qp; + cqp_info->in.u.qp_create.scratch = (uintptr_t)cqp_request; + ret = irdma_handle_cqp_op(rf, cqp_request); + if (ret) { + ibdev_dbg(to_ibdev(iwdev), "VERBS: CQP-OP QP create fail"); + err_code = -ENOMEM; + goto error; + } + + refcount_set(&iwqp->refcnt, 1); + spin_lock_init(&iwqp->lock); + spin_lock_init(&iwqp->sc_qp.pfpdu.lock); + iwqp->sig_all = (init_attr->sq_sig_type == IB_SIGNAL_ALL_WR) ? 
1 : 0; + rf->qp_table[qp_num] = iwqp; + iwqp->max_send_wr = sq_size; + iwqp->max_recv_wr = rq_size; + if (udata) { + /* GEN_1 legacy support with libi40iw */ + if (iwpd->sc_pd.abi_ver <= 5) { + uresp_gen1.lsmm = 1; + uresp_gen1.actual_sq_size = sq_size; + uresp_gen1.actual_rq_size = rq_size; + uresp_gen1.qp_id = qp_num; + uresp_gen1.push_idx = IRDMA_INVALID_PUSH_PAGE_INDEX; + uresp_gen1.lsmm = 1; + err_code = ib_copy_to_udata(udata, &uresp_gen1, + min(sizeof(uresp_gen1), udata->outlen)); + } else { + if (rdma_protocol_iwarp(&iwdev->ibdev, 1)) + uresp.lsmm = 1; + uresp.actual_sq_size = sq_size; + uresp.actual_rq_size = rq_size; + uresp.qp_id = qp_num; + uresp.qp_caps = qp->qp_uk.qp_caps; + + err_code = ib_copy_to_udata(udata, &uresp, + min(sizeof(uresp), udata->outlen)); + } + if (err_code) { + ibdev_dbg(to_ibdev(iwdev), + "VERBS: copy_to_udata failed\n"); + irdma_destroy_qp(&iwqp->ibqp, udata); + return ERR_PTR(err_code); + } + } + init_completion(&iwqp->sq_drained); + init_completion(&iwqp->rq_drained); + return &iwqp->ibqp; + +error: + irdma_free_qp_rsrc(iwdev, iwqp, qp_num); + + return ERR_PTR(err_code); +} + +/** + * irdma_query - query qp attributes + * @ibqp: qp pointer + * @attr: attributes pointer + * @attr_mask: Not used + * @init_attr: qp attributes to return + */ +static int irdma_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr, + int attr_mask, struct ib_qp_init_attr *init_attr) +{ + struct irdma_qp *iwqp = to_iwqp(ibqp); + struct irdma_sc_qp *qp = &iwqp->sc_qp; + + attr->qp_state = iwqp->ibqp_state; + attr->cur_qp_state = iwqp->ibqp_state; + attr->qp_access_flags = 0; + attr->cap.max_send_wr = iwqp->max_send_wr; + attr->cap.max_recv_wr = iwqp->max_recv_wr; + attr->cap.max_inline_data = qp->qp_uk.max_inline_data; + attr->cap.max_send_sge = qp->qp_uk.max_sq_frag_cnt; + attr->cap.max_recv_sge = qp->qp_uk.max_rq_frag_cnt; + attr->qkey = iwqp->roce_info.qkey; + + init_attr->event_handler = iwqp->ibqp.event_handler; + init_attr->qp_context = iwqp->ibqp.qp_context; + init_attr->send_cq = iwqp->ibqp.send_cq; + init_attr->recv_cq = iwqp->ibqp.recv_cq; + init_attr->cap = attr->cap; + + return 0; +} + +/** + * irdma_query_pkey - Query partition key + * @ibdev: device pointer from stack + * @port: port number + * @index: index of pkey + * @pkey: pointer to store the pkey + */ +static int irdma_query_pkey(struct ib_device *ibdev, u8 port, u16 index, + u16 *pkey) +{ + struct irdma_device *iwdev = to_iwdev(ibdev); + + if (index >= IRDMA_PKEY_TBL_SZ) + return -EINVAL; + + if (rdma_protocol_roce(&iwdev->ibdev, 1)) + *pkey = IRDMA_DEFAULT_PKEY; + else + *pkey = 0; + + return 0; +} + +/** + * irdma_modify_qp_roce - modify qp request + * @ibqp: qp's pointer for modify + * @attr: access attributes + * @attr_mask: state mask + * @udata: user data + */ +int irdma_modify_qp_roce(struct ib_qp *ibqp, struct ib_qp_attr *attr, + int attr_mask, struct ib_udata *udata) +{ + struct irdma_pd *iwpd = to_iwpd(ibqp->pd); + struct irdma_qp *iwqp = to_iwqp(ibqp); + struct irdma_device *iwdev = iwqp->iwdev; + struct irdma_sc_dev *dev = &iwdev->rf->sc_dev; + struct irdma_qp_host_ctx_info *ctx_info; + struct irdma_roce_offload_info *roce_info; + struct irdma_udp_offload_info *udp_info; + struct irdma_modify_qp_info info = {}; + struct irdma_modify_qp_resp uresp = {}; + struct irdma_modify_qp_req ureq = {}; + unsigned long flags; + u8 issue_modify_qp = 0; + int ret = 0; + + ctx_info = &iwqp->ctx_info; + roce_info = &iwqp->roce_info; + udp_info = &iwqp->udp_info; + + if (attr_mask & IB_QP_DEST_QPN) + 
roce_info->dest_qp = attr->dest_qp_num; + + if (attr_mask & IB_QP_PKEY_INDEX) { + ret = irdma_query_pkey(ibqp->device, 0, attr->pkey_index, + &roce_info->p_key); + if (ret) + return ret; + } + + if (attr_mask & IB_QP_QKEY) + roce_info->qkey = attr->qkey; + + if (attr_mask & IB_QP_PORT) + iwqp->roce_ah.av.attrs.port_num = attr->ah_attr.port_num; + + if (attr_mask & IB_QP_PATH_MTU) { + const u16 path_mtu[] = {-1, 256, 512, 1024, 2048, 4096}; + + if (attr->path_mtu < IB_MTU_256 || + attr->path_mtu > IB_MTU_4096 || + iwdev->vsi.mtu <= path_mtu[attr->path_mtu]) { + dev_warn(rfdev_to_dev(dev), "Invalid MTU %d\n", + attr->path_mtu); + return -EINVAL; + } + + udp_info->snd_mss = path_mtu[attr->path_mtu]; + } + + if (attr_mask & IB_QP_SQ_PSN) { + udp_info->psn_nxt = attr->sq_psn; + udp_info->lsn = 0xffff; + udp_info->psn_una = attr->sq_psn; + udp_info->psn_max = attr->sq_psn; + } + + if (attr_mask & IB_QP_RQ_PSN) + udp_info->epsn = attr->rq_psn; + + if (attr_mask & IB_QP_RNR_RETRY) + udp_info->rnr_nak_thresh = attr->rnr_retry; + + if (attr_mask & IB_QP_RETRY_CNT) + udp_info->rexmit_thresh = attr->retry_cnt; + + ctx_info->roce_info->pd_id = iwpd->sc_pd.pd_id; + + if (attr_mask & IB_QP_AV) { + struct irdma_av *av = &iwqp->roce_ah.av; + const struct ib_gid_attr *sgid_attr; + u16 vlan_id = VLAN_N_VID; + u32 local_ip[4]; + + memset(&iwqp->roce_ah, 0, sizeof(iwqp->roce_ah)); + if (attr->ah_attr.ah_flags & IB_AH_GRH) { + udp_info->ttl = attr->ah_attr.grh.hop_limit; + udp_info->flow_label = attr->ah_attr.grh.flow_label; + udp_info->tos = attr->ah_attr.grh.traffic_class; + dev->ws_remove(iwqp->sc_qp.vsi, ctx_info->user_pri); + ctx_info->user_pri = rt_tos2priority(udp_info->tos); + iwqp->sc_qp.user_pri = ctx_info->user_pri; + if (dev->ws_add(iwqp->sc_qp.vsi, ctx_info->user_pri)) + return -ENOMEM; + irdma_qp_add_qos(&iwqp->sc_qp); + } + sgid_attr = attr->ah_attr.grh.sgid_attr; + ret = rdma_read_gid_l2_fields(sgid_attr, &vlan_id, + ctx_info->roce_info->mac_addr); + if (ret) + return ret; + + if (vlan_id >= VLAN_N_VID && iwdev->dcb) + vlan_id = 0; + if (vlan_id < VLAN_N_VID) { + udp_info->insert_vlan_tag = true; + udp_info->vlan_tag = vlan_id | + ctx_info->user_pri << VLAN_PRIO_SHIFT; + } else { + udp_info->insert_vlan_tag = false; + } + + av->attrs = attr->ah_attr; + av->attrs.port_num = attr->ah_attr.port_num; + rdma_gid2ip(&av->sgid_addr.saddr, &sgid_attr->gid); + rdma_gid2ip(&av->dgid_addr.saddr, &attr->ah_attr.grh.dgid); + roce_info->local_qp = ibqp->qp_num; + if (av->sgid_addr.saddr.sa_family == AF_INET6) { + __be32 *daddr = + av->dgid_addr.saddr_in6.sin6_addr.in6_u.u6_addr32; + __be32 *saddr = + av->sgid_addr.saddr_in6.sin6_addr.in6_u.u6_addr32; + + irdma_copy_ip_ntohl(&udp_info->dest_ip_addr0, daddr); + irdma_copy_ip_ntohl(&udp_info->local_ipaddr0, saddr); + + udp_info->ipv4 = false; + irdma_copy_ip_ntohl(local_ip, daddr); + + udp_info->arp_idx = irdma_arp_table(iwdev->rf, + &local_ip[0], + false, NULL, + IRDMA_ARP_RESOLVE); + } else { + __be32 saddr = av->sgid_addr.saddr_in.sin_addr.s_addr; + __be32 daddr = av->dgid_addr.saddr_in.sin_addr.s_addr; + + local_ip[0] = ntohl(daddr); + + udp_info->ipv4 = true; + udp_info->dest_ip_addr0 = 0; + udp_info->dest_ip_addr1 = 0; + udp_info->dest_ip_addr2 = 0; + udp_info->dest_ip_addr3 = local_ip[0]; + + udp_info->local_ipaddr0 = 0; + udp_info->local_ipaddr1 = 0; + udp_info->local_ipaddr2 = 0; + udp_info->local_ipaddr3 = ntohl(saddr); + } + udp_info->arp_idx = + irdma_add_arp(iwdev->rf, local_ip, udp_info->ipv4, + attr->ah_attr.roce.dmac); + } + + if (attr_mask & 
IB_QP_MAX_QP_RD_ATOMIC) { + if (attr->max_rd_atomic > dev->hw_attrs.max_hw_ord) { + dev_err(rfdev_to_dev(dev), + "rd_atomic = %d, above max_hw_ord=%d\n", + attr->max_rd_atomic, dev->hw_attrs.max_hw_ord); + return -EINVAL; + } + if (attr->max_rd_atomic) + roce_info->ord_size = attr->max_rd_atomic; + info.ord_valid = true; + } + + if (attr_mask & IB_QP_MAX_DEST_RD_ATOMIC) { + if (attr->max_dest_rd_atomic > dev->hw_attrs.max_hw_ird) { + dev_err(rfdev_to_dev(dev), + "dest_rd_atomic = %d, above max_hw_ird=%d\n", + attr->max_dest_rd_atomic, dev->hw_attrs.max_hw_ird); + return -EINVAL; + } + if (attr->max_dest_rd_atomic) + roce_info->ird_size = irdma_derive_hw_ird_setting(attr->max_dest_rd_atomic); + } + + if (attr_mask & IB_QP_ACCESS_FLAGS) { + if (attr->qp_access_flags & IB_ACCESS_LOCAL_WRITE) + roce_info->wr_rdresp_en = true; + if (attr->qp_access_flags & IB_ACCESS_REMOTE_WRITE) + roce_info->wr_rdresp_en = true; + if (attr->qp_access_flags & IB_ACCESS_REMOTE_READ) + roce_info->rd_en = true; + if (attr->qp_access_flags & IB_ACCESS_MW_BIND) + roce_info->bind_en = true; + + if (iwqp->user_mode) { + roce_info->rd_en = true; + roce_info->wr_rdresp_en = true; + roce_info->priv_mode_en = false; + } + } + + wait_event(iwqp->mod_qp_waitq, !atomic_read(&iwqp->hw_mod_qp_pend)); + + spin_lock_irqsave(&iwqp->lock, flags); + if (attr_mask & IB_QP_STATE) { + if (!ib_modify_qp_is_ok(iwqp->ibqp_state, attr->qp_state, + iwqp->ibqp.qp_type, attr_mask)) { + dev_warn(rfdev_to_dev(dev), + "modify_qp invalid for qp_id=%d, old_state=0x%x, new_state=0x%x\n", + iwqp->ibqp.qp_num, iwqp->ibqp_state, + attr->qp_state); + ret = -EINVAL; + goto exit; + } + info.curr_iwarp_state = iwqp->iwarp_state; + + switch (attr->qp_state) { + case IB_QPS_INIT: + if (iwqp->iwarp_state > IRDMA_QP_STATE_IDLE) { + ret = -EINVAL; + goto exit; + } + + if (iwqp->iwarp_state == IRDMA_QP_STATE_INVALID) { + info.next_iwarp_state = IRDMA_QP_STATE_IDLE; + issue_modify_qp = 1; + } + break; + case IB_QPS_RTR: + if (iwqp->iwarp_state > IRDMA_QP_STATE_IDLE) { + ret = -EINVAL; + goto exit; + } + info.arp_cache_idx_valid = true; + info.cq_num_valid = true; + info.next_iwarp_state = IRDMA_QP_STATE_RTR; + issue_modify_qp = 1; + break; + case IB_QPS_RTS: + if (iwqp->ibqp_state < IB_QPS_RTR || + iwqp->ibqp_state == IB_QPS_ERR) { + ret = -EINVAL; + goto exit; + } + + info.arp_cache_idx_valid = true; + info.cq_num_valid = true; + info.next_iwarp_state = IRDMA_QP_STATE_RTS; + issue_modify_qp = 1; + if (iwdev->push_mode && udata && + dev->hw_attrs.uk_attrs.hw_rev > IRDMA_GEN_1) + irdma_alloc_push_page(iwqp); + break; + case IB_QPS_SQD: + if (iwqp->hw_iwarp_state > IRDMA_QP_STATE_RTS) + goto exit; + + if (iwqp->iwarp_state == IRDMA_QP_STATE_CLOSING || + iwqp->iwarp_state < IRDMA_QP_STATE_RTS) + goto exit; + + if (iwqp->iwarp_state > IRDMA_QP_STATE_CLOSING) { + ret = -EINVAL; + goto exit; + } + + info.next_iwarp_state = IRDMA_QP_STATE_ERROR; + issue_modify_qp = 1; + break; + case IB_QPS_SQE: + case IB_QPS_ERR: + case IB_QPS_RESET: + if (iwqp->ibqp_state == IB_QPS_SQD) + break; + + if (iwqp->iwarp_state == IRDMA_QP_STATE_ERROR) { + spin_unlock_irqrestore(&iwqp->lock, flags); + if (udata) { + if (ib_copy_from_udata(&ureq, udata, + min(sizeof(ureq), udata->inlen))) + return -EINVAL; + + irdma_flush_wqes(iwqp, + (ureq.sq_flush ? IRDMA_FLUSH_SQ : 0) | + (ureq.rq_flush ?
IRDMA_FLUSH_RQ : 0) | + IRDMA_REFLUSH); + return 0; + } + return -EINVAL; + } + + info.next_iwarp_state = IRDMA_QP_STATE_ERROR; + issue_modify_qp = 1; + break; + default: + ret = -EINVAL; + goto exit; + } + + iwqp->ibqp_state = attr->qp_state; + } + + ctx_info->send_cq_num = iwqp->iwscq->sc_cq.cq_uk.cq_id; + ctx_info->rcv_cq_num = iwqp->iwrcq->sc_cq.cq_uk.cq_id; + ret = dev->iw_priv_qp_ops->qp_setctx_roce(&iwqp->sc_qp, + iwqp->host_ctx.va, ctx_info); + spin_unlock_irqrestore(&iwqp->lock, flags); + + if (ret) { + ibdev_dbg(to_ibdev(iwdev), "VERBS: setctx_roce\n"); + return -EINVAL; + } + + if (attr_mask & IB_QP_STATE) { + if (issue_modify_qp) { + ctx_info->rem_endpoint_idx = udp_info->arp_idx; + if (irdma_hw_modify_qp(iwdev, iwqp, &info, true)) + return -EINVAL; + spin_lock_irqsave(&iwqp->lock, flags); + if (iwqp->iwarp_state == info.curr_iwarp_state) { + iwqp->iwarp_state = info.next_iwarp_state; + iwqp->ibqp_state = attr->qp_state; + } + if (iwqp->ibqp_state > IB_QPS_RTS && + !iwqp->flush_issued) { + iwqp->flush_issued = 1; + spin_unlock_irqrestore(&iwqp->lock, flags); + irdma_flush_wqes(iwqp, IRDMA_FLUSH_SQ | + IRDMA_FLUSH_RQ | + IRDMA_FLUSH_WAIT); + } else { + spin_unlock_irqrestore(&iwqp->lock, flags); + } + } else { + iwqp->ibqp_state = attr->qp_state; + } + if (udata && dev->hw_attrs.uk_attrs.hw_rev > IRDMA_GEN_1) { + struct irdma_ucontext *ucontext; + + ucontext = rdma_udata_to_drv_context(udata, + struct irdma_ucontext, ibucontext); + if (iwqp->sc_qp.push_idx == IRDMA_INVALID_PUSH_PAGE_INDEX || + iwqp->ibqp_state != IB_QPS_RTS) { + uresp.push_valid = 0; + } else { + ret = irdma_setup_push_mmap_entries(ucontext, + iwqp, &uresp.push_wqe_mmap_key, + &uresp.push_db_mmap_key); + if (ret) + return ret; + + uresp.push_valid = 1; + uresp.push_offset = iwqp->sc_qp.push_offset; + } + ret = ib_copy_to_udata(udata, &uresp, min(sizeof(uresp), + udata->outlen)); + if (ret) { + irdma_remove_push_mmap_entries(iwqp); + ibdev_dbg(to_ibdev(iwdev), + "VERBS: copy_to_udata failed\n"); + return ret; + } + } + } + + return 0; +exit: + spin_unlock_irqrestore(&iwqp->lock, flags); + + return ret; +} + +/** + * irdma_modify_qp - modify qp request + * @ibqp: qp's pointer for modify + * @attr: access attributes + * @attr_mask: state mask + * @udata: user data + */ +int irdma_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr, int attr_mask, + struct ib_udata *udata) +{ + struct irdma_qp *iwqp = to_iwqp(ibqp); + struct irdma_device *iwdev = iwqp->iwdev; + struct irdma_sc_dev *dev = &iwdev->rf->sc_dev; + struct irdma_qp_host_ctx_info *ctx_info; + struct irdma_tcp_offload_info *tcp_info; + struct irdma_iwarp_offload_info *offload_info; + struct irdma_modify_qp_info info = {}; + struct irdma_modify_qp_resp uresp = {}; + struct irdma_modify_qp_req ureq = {}; + u8 issue_modify_qp = 0; + u8 dont_wait = 0; + int err; + unsigned long flags; + + ctx_info = &iwqp->ctx_info; + offload_info = &iwqp->iwarp_info; + tcp_info = &iwqp->tcp_info; + + wait_event(iwqp->mod_qp_waitq, !atomic_read(&iwqp->hw_mod_qp_pend)); + + spin_lock_irqsave(&iwqp->lock, flags); + if (attr_mask & IB_QP_STATE) { + info.curr_iwarp_state = iwqp->iwarp_state; + switch (attr->qp_state) { + case IB_QPS_INIT: + case IB_QPS_RTR: + if (iwqp->iwarp_state > IRDMA_QP_STATE_IDLE) { + err = -EINVAL; + goto exit; + } + + if (iwqp->iwarp_state == IRDMA_QP_STATE_INVALID) { + info.next_iwarp_state = IRDMA_QP_STATE_IDLE; + issue_modify_qp = 1; + } + if (iwdev->push_mode && udata && + dev->hw_attrs.uk_attrs.hw_rev > IRDMA_GEN_1) + irdma_alloc_push_page(iwqp); + 
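For reference, the attribute masks handled by irdma_modify_qp_roce() above correspond to the usual libibverbs RC bring-up sequence from user space. The sketch below is illustrative only: the MTU, PSNs, timeouts and retry counts are assumed values, and QP/AH creation, GID discovery and error reporting are omitted.

#include <infiniband/verbs.h>
#include <stdint.h>
#include <string.h>

int bring_up_rc_qp(struct ibv_qp *qp, uint8_t port, uint32_t remote_qpn,
                   uint32_t rq_psn, uint32_t sq_psn, union ibv_gid *dgid,
                   int gid_index)
{
        struct ibv_qp_attr attr;

        memset(&attr, 0, sizeof(attr));
        attr.qp_state = IBV_QPS_INIT;
        attr.pkey_index = 0;
        attr.port_num = port;
        attr.qp_access_flags = IBV_ACCESS_REMOTE_WRITE | IBV_ACCESS_REMOTE_READ;
        if (ibv_modify_qp(qp, &attr, IBV_QP_STATE | IBV_QP_PKEY_INDEX |
                          IBV_QP_PORT | IBV_QP_ACCESS_FLAGS))
                return -1;

        memset(&attr, 0, sizeof(attr));
        attr.qp_state = IBV_QPS_RTR;
        attr.path_mtu = IBV_MTU_1024;       /* IB_QP_PATH_MTU -> udp_info->snd_mss */
        attr.dest_qp_num = remote_qpn;      /* IB_QP_DEST_QPN branch */
        attr.rq_psn = rq_psn;               /* IB_QP_RQ_PSN -> udp_info->epsn */
        attr.max_dest_rd_atomic = 1;        /* IB_QP_MAX_DEST_RD_ATOMIC -> ird_size */
        attr.min_rnr_timer = 12;
        attr.ah_attr.is_global = 1;         /* IB_QP_AV: GRH fields */
        attr.ah_attr.port_num = port;
        attr.ah_attr.grh.dgid = *dgid;
        attr.ah_attr.grh.sgid_index = gid_index;
        attr.ah_attr.grh.hop_limit = 64;
        if (ibv_modify_qp(qp, &attr, IBV_QP_STATE | IBV_QP_AV |
                          IBV_QP_PATH_MTU | IBV_QP_DEST_QPN | IBV_QP_RQ_PSN |
                          IBV_QP_MAX_DEST_RD_ATOMIC | IBV_QP_MIN_RNR_TIMER))
                return -1;

        memset(&attr, 0, sizeof(attr));
        attr.qp_state = IBV_QPS_RTS;
        attr.sq_psn = sq_psn;               /* IB_QP_SQ_PSN -> udp_info->psn_nxt */
        attr.timeout = 14;
        attr.retry_cnt = 7;                 /* IB_QP_RETRY_CNT -> rexmit_thresh */
        attr.rnr_retry = 7;                 /* IB_QP_RNR_RETRY -> rnr_nak_thresh */
        attr.max_rd_atomic = 1;             /* IB_QP_MAX_QP_RD_ATOMIC -> ord_size */
        return ibv_modify_qp(qp, &attr, IBV_QP_STATE | IBV_QP_SQ_PSN |
                             IBV_QP_TIMEOUT | IBV_QP_RETRY_CNT |
                             IBV_QP_RNR_RETRY | IBV_QP_MAX_QP_RD_ATOMIC);
}

Each ibv_modify_qp() call reaches the driver through uverbs with exactly the attr_mask bits commented above.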
break; + case IB_QPS_RTS: + if (iwqp->iwarp_state > IRDMA_QP_STATE_RTS || + !iwqp->cm_id) { + err = -EINVAL; + goto exit; + } + + issue_modify_qp = 1; + iwqp->hw_tcp_state = IRDMA_TCP_STATE_ESTABLISHED; + iwqp->hte_added = 1; + info.next_iwarp_state = IRDMA_QP_STATE_RTS; + info.tcp_ctx_valid = true; + info.ord_valid = true; + info.arp_cache_idx_valid = true; + info.cq_num_valid = true; + break; + case IB_QPS_SQD: + if (iwqp->hw_iwarp_state > IRDMA_QP_STATE_RTS) { + err = 0; + goto exit; + } + + if (iwqp->iwarp_state == IRDMA_QP_STATE_CLOSING || + iwqp->iwarp_state < IRDMA_QP_STATE_RTS) { + err = 0; + goto exit; + } + + if (iwqp->iwarp_state > IRDMA_QP_STATE_CLOSING) { + err = -EINVAL; + goto exit; + } + + info.next_iwarp_state = IRDMA_QP_STATE_CLOSING; + issue_modify_qp = 1; + break; + case IB_QPS_SQE: + if (iwqp->iwarp_state >= IRDMA_QP_STATE_TERMINATE) { + err = -EINVAL; + goto exit; + } + + info.next_iwarp_state = IRDMA_QP_STATE_TERMINATE; + issue_modify_qp = 1; + break; + case IB_QPS_ERR: + case IB_QPS_RESET: + if (iwqp->iwarp_state == IRDMA_QP_STATE_ERROR) { + spin_unlock_irqrestore(&iwqp->lock, flags); + if (udata) { + if (ib_copy_from_udata(&ureq, udata, + min(sizeof(ureq), udata->inlen))) + return -EINVAL; + + irdma_flush_wqes(iwqp, + (ureq.sq_flush ? IRDMA_FLUSH_SQ : 0) | + (ureq.rq_flush ? IRDMA_FLUSH_RQ : 0) | + IRDMA_REFLUSH); + + return 0; + } + return -EINVAL; + } + + if (iwqp->sc_qp.term_flags) { + spin_unlock_irqrestore(&iwqp->lock, flags); + irdma_terminate_del_timer(&iwqp->sc_qp); + spin_lock_irqsave(&iwqp->lock, flags); + } + info.next_iwarp_state = IRDMA_QP_STATE_ERROR; + if (iwqp->hw_tcp_state > IRDMA_TCP_STATE_CLOSED && + iwdev->iw_status && + iwqp->hw_tcp_state != IRDMA_TCP_STATE_TIME_WAIT) + info.reset_tcp_conn = true; + else + dont_wait = 1; + + issue_modify_qp = 1; + info.next_iwarp_state = IRDMA_QP_STATE_ERROR; + break; + default: + err = -EINVAL; + goto exit; + } + + iwqp->ibqp_state = attr->qp_state; + } + if (attr_mask & IB_QP_ACCESS_FLAGS) { + ctx_info->iwarp_info_valid = true; + if (attr->qp_access_flags & IB_ACCESS_LOCAL_WRITE) + offload_info->wr_rdresp_en = true; + if (attr->qp_access_flags & IB_ACCESS_REMOTE_WRITE) + offload_info->wr_rdresp_en = true; + if (attr->qp_access_flags & IB_ACCESS_REMOTE_READ) + offload_info->rd_en = true; + if (attr->qp_access_flags & IB_ACCESS_MW_BIND) + offload_info->bind_en = true; + + if (iwqp->user_mode) { + offload_info->rd_en = true; + offload_info->wr_rdresp_en = true; + offload_info->priv_mode_en = false; + } + } + + if (ctx_info->iwarp_info_valid) { + struct irdma_sc_dev *dev = &iwdev->rf->sc_dev; + int ret; + + ctx_info->send_cq_num = iwqp->iwscq->sc_cq.cq_uk.cq_id; + ctx_info->rcv_cq_num = iwqp->iwrcq->sc_cq.cq_uk.cq_id; + ret = dev->iw_priv_qp_ops->qp_setctx(&iwqp->sc_qp, + iwqp->host_ctx.va, + ctx_info); + if (ret) { + ibdev_dbg(to_ibdev(iwdev), + "VERBS: setting QP context\n"); + err = -EINVAL; + goto exit; + } + } + spin_unlock_irqrestore(&iwqp->lock, flags); + + if (attr_mask & IB_QP_STATE) { + if (issue_modify_qp) { + ctx_info->rem_endpoint_idx = tcp_info->arp_idx; + if (irdma_hw_modify_qp(iwdev, iwqp, &info, true)) + return -EINVAL; + } + + spin_lock_irqsave(&iwqp->lock, flags); + if (iwqp->iwarp_state == info.curr_iwarp_state) { + iwqp->iwarp_state = info.next_iwarp_state; + iwqp->ibqp_state = attr->qp_state; + } + spin_unlock_irqrestore(&iwqp->lock, flags); + } + + if (issue_modify_qp && iwqp->ibqp_state > IB_QPS_RTS) { + if (dont_wait) { + if (iwqp->cm_id && iwqp->hw_tcp_state) { + 
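The IB_QP_ACCESS_FLAGS handling above (present in both the RoCE and iWARP modify paths, and echoed later by irdma_get_user_access()) reduces to a small flag-to-enable-bit mapping, with user-mode QPs additionally forcing read/write responses on and privileged mode off. A simplified, runnable mirror of that mapping; the flag values and field names are stand-ins, not the kernel definitions:

#include <stdbool.h>
#include <stdio.h>

#define ACC_LOCAL_WRITE  (1 << 0)
#define ACC_REMOTE_WRITE (1 << 1)
#define ACC_REMOTE_READ  (1 << 2)
#define ACC_MW_BIND      (1 << 3)

struct offload_bits {
        bool wr_rdresp_en;      /* RDMA write / read-response enable */
        bool rd_en;             /* RDMA read enable */
        bool bind_en;           /* memory-window bind enable */
};

static struct offload_bits map_access_flags(int acc)
{
        struct offload_bits b = { false, false, false };

        if (acc & (ACC_LOCAL_WRITE | ACC_REMOTE_WRITE))
                b.wr_rdresp_en = true;
        if (acc & ACC_REMOTE_READ)
                b.rd_en = true;
        if (acc & ACC_MW_BIND)
                b.bind_en = true;
        return b;
}

int main(void)
{
        struct offload_bits b = map_access_flags(ACC_REMOTE_WRITE | ACC_REMOTE_READ);

        printf("wr_rdresp_en=%d rd_en=%d bind_en=%d\n",
               b.wr_rdresp_en, b.rd_en, b.bind_en);
        return 0;
}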
spin_lock_irqsave(&iwqp->lock, flags); + iwqp->hw_tcp_state = IRDMA_TCP_STATE_CLOSED; + iwqp->last_aeq = IRDMA_AE_RESET_SENT; + spin_unlock_irqrestore(&iwqp->lock, flags); + irdma_cm_disconn(iwqp); + } + } else { + int close_timer_started; + + spin_lock_irqsave(&iwqp->lock, flags); + close_timer_started = atomic_inc_return(&iwqp->close_timer_started); + if (iwqp->cm_id && close_timer_started == 1) { + iwqp->cm_id->add_ref(iwqp->cm_id); + spin_unlock_irqrestore(&iwqp->lock, flags); + irdma_schedule_cm_timer(iwqp->cm_node, + (struct irdma_puda_buf *)iwqp, + IRDMA_TIMER_TYPE_CLOSE, + 1, + 0); + } else { + spin_unlock_irqrestore(&iwqp->lock, flags); + } + } + } + if (attr_mask & IB_QP_STATE && udata && + dev->hw_attrs.uk_attrs.hw_rev > IRDMA_GEN_1) { + struct irdma_ucontext *ucontext; + + ucontext = rdma_udata_to_drv_context(udata, + struct irdma_ucontext, ibucontext); + if (iwqp->sc_qp.push_idx == IRDMA_INVALID_PUSH_PAGE_INDEX || + iwqp->ibqp_state != IB_QPS_RTS) { + uresp.push_valid = 0; + } else { + err = irdma_setup_push_mmap_entries(ucontext, + iwqp, &uresp.push_wqe_mmap_key, + &uresp.push_db_mmap_key); + if (err) + return err; + + uresp.push_valid = 1; + uresp.push_offset = iwqp->sc_qp.push_offset; + } + err = ib_copy_to_udata(udata, &uresp, min(sizeof(uresp), + udata->outlen)); + if (err) { + irdma_remove_push_mmap_entries(iwqp); + ibdev_dbg(to_ibdev(iwdev), + "VERBS: copy_to_udata failed\n"); + return err; + } + } + + return 0; +exit: + spin_unlock_irqrestore(&iwqp->lock, flags); + + return err; +} + +/** + * irdma_cq_free_rsrc - free up resources for cq + * @rf: RDMA PCI function + * @iwcq: cq ptr + */ +static void irdma_cq_free_rsrc(struct irdma_pci_f *rf, struct irdma_cq *iwcq) +{ + struct irdma_sc_cq *cq = &iwcq->sc_cq; + + if (!iwcq->user_mode) { + dma_free_coherent(hw_to_dev(rf->sc_dev.hw), iwcq->kmem.size, + iwcq->kmem.va, iwcq->kmem.pa); + iwcq->kmem.va = NULL; + dma_free_coherent(hw_to_dev(rf->sc_dev.hw), + iwcq->kmem_shadow.size, + iwcq->kmem_shadow.va, iwcq->kmem_shadow.pa); + iwcq->kmem_shadow.va = NULL; + } + + irdma_free_rsrc(rf, rf->allocated_cqs, cq->cq_uk.cq_id); +} + +/** + * irdma_free_cqbuf - worker to free a cq buffer + * @work: provides access to the cq buffer to free + */ +static void irdma_free_cqbuf(struct work_struct *work) +{ + struct irdma_cq_buf *cq_buf = container_of(work, struct irdma_cq_buf, work); + + dma_free_coherent(hw_to_dev(cq_buf->hw), cq_buf->kmem_buf.size, + cq_buf->kmem_buf.va, cq_buf->kmem_buf.pa); + cq_buf->kmem_buf.va = NULL; + kfree(cq_buf); +} + +/** + * irdma_process_resize_list - remove resized cq buffers from the resize_list + * @iwcq: cq which owns the resize_list + * @iwdev: irdma device + * @lcqe_buf: the buffer where the last cqe is received + */ +static int irdma_process_resize_list(struct irdma_cq *iwcq, + struct irdma_device *iwdev, + struct irdma_cq_buf *lcqe_buf) +{ + struct list_head *tmp_node, *list_node; + struct irdma_cq_buf *cq_buf; + int cnt = 0; + + list_for_each_safe(list_node, tmp_node, &iwcq->resize_list) { + cq_buf = list_entry(list_node, struct irdma_cq_buf, list); + if (cq_buf == lcqe_buf) + return cnt; + + list_del(&cq_buf->list); + queue_work(iwdev->cleanup_wq, &cq_buf->work); + cnt++; + } + + return cnt; +} + +/** + * irdma_destroy_cq - destroy cq + * @ib_cq: cq pointer + * @udata: user data + */ +static void irdma_destroy_cq(struct ib_cq *ib_cq, struct ib_udata *udata) +{ + struct irdma_cq *iwcq; + struct irdma_device *iwdev; + struct irdma_sc_cq *cq; + unsigned long flags; + + iwcq = to_iwcq(ib_cq); + iwdev = 
to_iwdev(ib_cq->device); + + if (!list_empty(&iwcq->resize_list)) { + spin_lock_irqsave(&iwcq->lock, flags); + irdma_process_resize_list(iwcq, iwdev, NULL); + spin_unlock_irqrestore(&iwcq->lock, flags); + } + cq = &iwcq->sc_cq; + irdma_cq_wq_destroy(iwdev->rf, cq); + irdma_cq_free_rsrc(iwdev->rf, iwcq); +} + +/** + * irdma_resize_cq - resize cq + * @ibcq: cq to be resized + * @entries: desired cq size + * @udata: user data + */ +static int irdma_resize_cq(struct ib_cq *ibcq, int entries, + struct ib_udata *udata) +{ + struct irdma_cq *iwcq = to_iwcq(ibcq); + struct irdma_sc_dev *dev = iwcq->sc_cq.dev; + struct irdma_cqp_request *cqp_request; + struct cqp_cmds_info *cqp_info; + struct irdma_modify_cq_info *m_info; + struct irdma_modify_cq_info info = {}; + struct irdma_dma_mem kmem_buf; + struct irdma_cq_mr *cqmr_buf; + struct irdma_pbl *iwpbl_buf; + struct irdma_device *iwdev; + struct irdma_pci_f *rf; + struct irdma_cq_buf *cq_buf = NULL; + enum irdma_status_code status = 0; + unsigned long flags; + + iwdev = to_iwdev(ibcq->device); + rf = iwdev->rf; + + if (!(rf->sc_dev.hw_attrs.uk_attrs.feature_flags & + IRDMA_FEATURE_CQ_RESIZE)) + return -ENOTSUPP; + + if (entries > rf->max_cqe) + return -EINVAL; + + if (!iwcq->user_mode) { + entries++; + if (rf->sc_dev.hw_attrs.uk_attrs.hw_rev > IRDMA_GEN_1) + entries *= 2; + } + + info.cq_size = max(entries, 4); + + if (info.cq_size == iwcq->sc_cq.cq_uk.cq_size - 1) + return 0; + + if (udata) { + struct irdma_resize_cq_req req = {}; + struct irdma_ucontext *ucontext = + rdma_udata_to_drv_context(udata, struct irdma_ucontext, + ibucontext); + + /* CQ resize not supported with legacy GEN_1 libi40iw */ + if (ucontext->abi_ver <= 5) + return -ENOTSUPP; + + if (ib_copy_from_udata(&req, udata, + min(sizeof(req), udata->inlen))) + return -EINVAL; + + spin_lock_irqsave(&ucontext->cq_reg_mem_list_lock, flags); + iwpbl_buf = irdma_get_pbl((unsigned long)req.user_cq_buffer, + &ucontext->cq_reg_mem_list); + spin_unlock_irqrestore(&ucontext->cq_reg_mem_list_lock, flags); + + if (!iwpbl_buf) + return -ENOMEM; + + cqmr_buf = &iwpbl_buf->cq_mr; + if (iwpbl_buf->pbl_allocated) { + info.virtual_map = true; + info.pbl_chunk_size = 1; + info.first_pm_pbl_idx = cqmr_buf->cq_pbl.idx; + } else { + info.cq_pa = cqmr_buf->cq_pbl.addr; + } + } else { + /* Kmode CQ resize */ + int rsize; + + rsize = info.cq_size * sizeof(struct irdma_cqe); + kmem_buf.size = ALIGN(round_up(rsize, 256), 256); + kmem_buf.va = dma_alloc_coherent(hw_to_dev(dev->hw), + kmem_buf.size, &kmem_buf.pa, + GFP_KERNEL); + if (!kmem_buf.va) + return -ENOMEM; + + info.cq_base = kmem_buf.va; + info.cq_pa = kmem_buf.pa; + cq_buf = kzalloc(sizeof(*cq_buf), GFP_KERNEL); + if (!cq_buf) { + dma_free_coherent(hw_to_dev(dev->hw), kmem_buf.size, + kmem_buf.va, kmem_buf.pa); + kmem_buf.va = NULL; + return -ENOMEM; + } + } + + cqp_request = irdma_get_cqp_request(&rf->cqp, true); + if (!cqp_request) { + dma_free_coherent(hw_to_dev(dev->hw), kmem_buf.size, + kmem_buf.va, kmem_buf.pa); + kmem_buf.va = NULL; + kfree(cq_buf); + return -ENOMEM; + } + + info.shadow_read_threshold = iwcq->sc_cq.shadow_read_threshold; + info.ceq_valid = false; + info.cq_resize = true; + + cqp_info = &cqp_request->info; + m_info = &cqp_info->in.u.cq_modify.info; + memcpy(m_info, &info, sizeof(*m_info)); + + cqp_info->cqp_cmd = IRDMA_OP_CQ_MODIFY; + cqp_info->in.u.cq_modify.cq = &iwcq->sc_cq; + cqp_info->in.u.cq_modify.scratch = (uintptr_t)cqp_request; + cqp_info->post_sq = 1; + status = irdma_handle_cqp_op(rf, cqp_request); + if (status) { + 
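The kernel-mode sizing used by irdma_resize_cq() above (and by irdma_create_cq() further down) follows a simple rule: reserve one extra slot, double the ring on hardware newer than GEN_1, clamp to a minimum depth of 4, and round the CQE array up to a 256-byte boundary for the DMA allocation. A runnable sketch of that arithmetic; the 64-byte CQE size is an assumption standing in for sizeof(struct irdma_cqe):

#include <stddef.h>
#include <stdio.h>

#define ALIGN_UP(x, a) (((x) + (size_t)(a) - 1) & ~((size_t)(a) - 1))

static size_t cq_ring_bytes(int requested, int gen2_hw, int user_mode,
                            unsigned int *depth_out)
{
        int entries = requested;

        if (!user_mode) {
                entries++;              /* extra slot reserved by the driver */
                if (gen2_hw)
                        entries *= 2;   /* ring doubled on GEN_2 hardware */
        }
        if (entries < 4)
                entries = 4;            /* max(entries, 4) in the driver */
        *depth_out = (unsigned int)entries;

        /* CQE array rounded up to 256 bytes; 64-byte CQEs assumed here. */
        return ALIGN_UP((size_t)entries * 64, 256);
}

int main(void)
{
        unsigned int depth;
        size_t bytes = cq_ring_bytes(1024, 1, 0, &depth);

        printf("hw depth=%u, ring buffer=%zu bytes\n", depth, bytes);
        return 0;
}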
dma_free_coherent(hw_to_dev(dev->hw), kmem_buf.size, + kmem_buf.va, kmem_buf.pa); + kmem_buf.va = NULL; + kfree(cq_buf); + ibdev_dbg(to_ibdev(iwdev), "VERBS: CQP-OP Resize CQ fail"); + return -EPROTO; + } + + spin_lock_irqsave(&iwcq->lock, flags); + if (cq_buf) { + cq_buf->kmem_buf = iwcq->kmem; + cq_buf->hw = dev->hw; + memcpy(&cq_buf->cq_uk, &iwcq->sc_cq.cq_uk, sizeof(cq_buf->cq_uk)); + INIT_WORK(&cq_buf->work, irdma_free_cqbuf); + list_add_tail(&cq_buf->list, &iwcq->resize_list); + iwcq->kmem = kmem_buf; + } + + dev->iw_priv_cq_ops->cq_resize(&iwcq->sc_cq, &info); + ibcq->cqe = info.cq_size - 1; + spin_unlock_irqrestore(&iwcq->lock, flags); + + return 0; +} + +/** + * irdma_create_cq - create cq + * @ibcq: CQ allocated + * @attr: attributes for cq + * @udata: user data + */ +static int irdma_create_cq(struct ib_cq *ibcq, + const struct ib_cq_init_attr *attr, + struct ib_udata *udata) +{ + struct ib_device *ibdev = ibcq->device; + struct irdma_device *iwdev = to_iwdev(ibdev); + struct irdma_pci_f *rf = iwdev->rf; + struct irdma_cq *iwcq = to_iwcq(ibcq); + u32 cq_num = 0; + struct irdma_sc_cq *cq; + struct irdma_sc_dev *dev = &rf->sc_dev; + struct irdma_cq_init_info info = {}; + enum irdma_status_code status; + struct irdma_cqp_request *cqp_request; + struct cqp_cmds_info *cqp_info; + struct irdma_cq_uk_init_info *ukinfo = &info.cq_uk_init_info; + unsigned long flags; + int err_code; + int entries = attr->cqe; + + err_code = irdma_alloc_rsrc(rf, rf->allocated_cqs, rf->max_cq, &cq_num, + &rf->next_cq); + if (err_code) + return err_code; + + cq = &iwcq->sc_cq; + cq->back_cq = iwcq; + spin_lock_init(&iwcq->lock); + INIT_LIST_HEAD(&iwcq->resize_list); + info.dev = dev; + ukinfo->cq_size = max(entries, 4); + ukinfo->cq_id = cq_num; + iwcq->ibcq.cqe = info.cq_uk_init_info.cq_size; + if (attr->comp_vector < rf->ceqs_count) + info.ceq_id = attr->comp_vector; + info.ceq_id_valid = true; + info.ceqe_mask = 1; + info.type = IRDMA_CQ_TYPE_IWARP; + info.vsi = &iwdev->vsi; + + if (udata) { + struct irdma_ucontext *ucontext; + struct irdma_create_cq_req req = {}; + struct irdma_cq_mr *cqmr; + struct irdma_pbl *iwpbl; + struct irdma_pbl *iwpbl_shadow; + struct irdma_cq_mr *cqmr_shadow; + + iwcq->user_mode = true; + ucontext = + rdma_udata_to_drv_context(udata, struct irdma_ucontext, + ibucontext); + if (ib_copy_from_udata(&req, udata, + min(sizeof(req), udata->inlen))) { + err_code = -EFAULT; + goto cq_free_rsrc; + } + + spin_lock_irqsave(&ucontext->cq_reg_mem_list_lock, flags); + iwpbl = irdma_get_pbl((unsigned long)req.user_cq_buf, + &ucontext->cq_reg_mem_list); + spin_unlock_irqrestore(&ucontext->cq_reg_mem_list_lock, flags); + if (!iwpbl) { + err_code = -EPROTO; + goto cq_free_rsrc; + } + + iwcq->iwpbl = iwpbl; + iwcq->cq_mem_size = 0; + cqmr = &iwpbl->cq_mr; + + if (rf->sc_dev.hw_attrs.uk_attrs.feature_flags & + IRDMA_FEATURE_CQ_RESIZE && + ucontext->abi_ver > 5) { + spin_lock_irqsave(&ucontext->cq_reg_mem_list_lock, flags); + iwpbl_shadow = irdma_get_pbl( + (unsigned long)req.user_shadow_area, + &ucontext->cq_reg_mem_list); + spin_unlock_irqrestore(&ucontext->cq_reg_mem_list_lock, flags); + + if (!iwpbl_shadow) { + err_code = -EPROTO; + goto cq_free_rsrc; + } + iwcq->iwpbl_shadow = iwpbl_shadow; + cqmr_shadow = &iwpbl_shadow->cq_mr; + info.shadow_area_pa = cqmr_shadow->cq_pbl.addr; + cqmr->split = true; + } else { + info.shadow_area_pa = cqmr->shadow; + } + if (iwpbl->pbl_allocated) { + info.virtual_map = true; + info.pbl_chunk_size = 1; + info.first_pm_pbl_idx = cqmr->cq_pbl.idx; + } else { + 
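For context, the CQ verbs in this file back the standard libibverbs flow: ibv_create_cq()/ibv_resize_cq() reach irdma_create_cq()/irdma_resize_cq() through uverbs, while the kernel-side poll/arm entry points further below serve in-kernel consumers (user-space CQs poll and re-arm through the provider library). A hedged sketch of the usual create/arm/wait/poll loop; the sizes are illustrative and error-path cleanup is abbreviated:

#include <infiniband/verbs.h>
#include <stdio.h>

int cq_event_loop(struct ibv_context *ctx)
{
        struct ibv_comp_channel *ch = ibv_create_comp_channel(ctx);
        struct ibv_cq *cq, *ev_cq;
        struct ibv_wc wc[16];
        void *ev_ctx;
        int n;

        if (!ch)
                return -1;
        cq = ibv_create_cq(ctx, 256, NULL, ch, 0);  /* -> irdma_create_cq() */
        if (!cq)
                return -1;

        ibv_req_notify_cq(cq, 0);                   /* arm for the next completion */
        if (ibv_get_cq_event(ch, &ev_cq, &ev_ctx))  /* block until an event fires */
                return -1;
        ibv_ack_cq_events(ev_cq, 1);
        ibv_req_notify_cq(ev_cq, 0);                /* re-arm before draining */

        while ((n = ibv_poll_cq(ev_cq, 16, wc)) > 0)
                for (int i = 0; i < n; i++)
                        printf("wr_id=%llu status=%d\n",
                               (unsigned long long)wc[i].wr_id, wc[i].status);

        ibv_destroy_cq(cq);
        ibv_destroy_comp_channel(ch);
        return 0;
}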
info.cq_base_pa = cqmr->cq_pbl.addr; + } + } else { + /* Kmode allocations */ + int rsize; + + if (entries > rf->max_cqe) { + err_code = -EINVAL; + goto cq_free_rsrc; + } + + entries++; + if (dev->hw_attrs.uk_attrs.hw_rev > IRDMA_GEN_1) + entries *= 2; + ukinfo->cq_size = entries; + + rsize = info.cq_uk_init_info.cq_size * sizeof(struct irdma_cqe); + iwcq->kmem.size = ALIGN(round_up(rsize, 256), 256); + iwcq->kmem.va = dma_alloc_coherent(hw_to_dev(dev->hw), + iwcq->kmem.size, + &iwcq->kmem.pa, GFP_KERNEL); + if (!iwcq->kmem.va) { + err_code = -ENOMEM; + goto cq_free_rsrc; + } + + iwcq->kmem_shadow.size = ALIGN(IRDMA_SHADOW_AREA_SIZE << 3, + 64); + iwcq->kmem_shadow.va = dma_alloc_coherent(hw_to_dev(dev->hw), + iwcq->kmem_shadow.size, + &iwcq->kmem_shadow.pa, + GFP_KERNEL); + if (!iwcq->kmem_shadow.va) { + err_code = -ENOMEM; + goto cq_free_rsrc; + } + info.shadow_area_pa = iwcq->kmem_shadow.pa; + ukinfo->shadow_area = iwcq->kmem_shadow.va; + ukinfo->cq_base = iwcq->kmem.va; + info.cq_base_pa = iwcq->kmem.pa; + } + + if (dev->hw_attrs.uk_attrs.hw_rev > IRDMA_GEN_1) + info.shadow_read_threshold = min(info.cq_uk_init_info.cq_size / 2, + (u32)IRDMA_MAX_CQ_READ_THRESH); + if (dev->iw_priv_cq_ops->cq_init(cq, &info)) { + ibdev_dbg(to_ibdev(iwdev), "VERBS: init cq fail\n"); + err_code = -EPROTO; + goto cq_free_rsrc; + } + + cqp_request = irdma_get_cqp_request(&rf->cqp, true); + if (!cqp_request) { + err_code = -ENOMEM; + goto cq_free_rsrc; + } + + cqp_info = &cqp_request->info; + cqp_info->cqp_cmd = IRDMA_OP_CQ_CREATE; + cqp_info->post_sq = 1; + cqp_info->in.u.cq_create.cq = cq; + cqp_info->in.u.cq_create.check_overflow = true; + cqp_info->in.u.cq_create.scratch = (uintptr_t)cqp_request; + status = irdma_handle_cqp_op(rf, cqp_request); + if (status) { + ibdev_dbg(to_ibdev(iwdev), "VERBS: CQP-OP Create CQ fail"); + err_code = -ENOMEM; + goto cq_free_rsrc; + } + + if (udata) { + struct irdma_create_cq_resp resp = {}; + + resp.cq_id = info.cq_uk_init_info.cq_id; + resp.cq_size = info.cq_uk_init_info.cq_size; + if (ib_copy_to_udata(udata, &resp, + min(sizeof(resp), udata->outlen))) { + ibdev_dbg(to_ibdev(iwdev), + "VERBS: copy to user data\n"); + err_code = -EPROTO; + goto cq_destroy; + } + } + return 0; + +cq_destroy: + irdma_cq_wq_destroy(rf, cq); +cq_free_rsrc: + irdma_cq_free_rsrc(rf, iwcq); + + return err_code; +} + +/** + * irdma_get_user_access - get hw access from IB access + * @acc: IB access to return hw access + */ +static inline u16 irdma_get_user_access(int acc) +{ + u16 access = 0; + + access |= (acc & IB_ACCESS_LOCAL_WRITE) ? + IRDMA_ACCESS_FLAGS_LOCALWRITE : 0; + access |= (acc & IB_ACCESS_REMOTE_WRITE) ? + IRDMA_ACCESS_FLAGS_REMOTEWRITE : 0; + access |= (acc & IB_ACCESS_REMOTE_READ) ? + IRDMA_ACCESS_FLAGS_REMOTEREAD : 0; + access |= (acc & IB_ACCESS_MW_BIND) ? 
+ IRDMA_ACCESS_FLAGS_BIND_WINDOW : 0; + + return access; +} + +/** + * irdma_free_stag - free stag resource + * @iwdev: irdma device + * @stag: stag to free + */ +static void irdma_free_stag(struct irdma_device *iwdev, u32 stag) +{ + u32 stag_idx; + + stag_idx = (stag & iwdev->rf->mr_stagmask) >> IRDMA_CQPSQ_STAG_IDX_S; + irdma_free_rsrc(iwdev->rf, iwdev->rf->allocated_mrs, stag_idx); +} + +/** + * irdma_create_stag - create random stag + * @iwdev: irdma device + */ +static u32 irdma_create_stag(struct irdma_device *iwdev) +{ + u32 stag = 0; + u32 stag_index = 0; + u32 next_stag_index; + u32 driver_key; + u32 random; + u8 consumer_key; + int ret; + + get_random_bytes(&random, sizeof(random)); + consumer_key = (u8)random; + + driver_key = random & ~iwdev->rf->mr_stagmask; + next_stag_index = (random & iwdev->rf->mr_stagmask) >> 8; + next_stag_index %= iwdev->rf->max_mr; + + ret = irdma_alloc_rsrc(iwdev->rf, iwdev->rf->allocated_mrs, + iwdev->rf->max_mr, &stag_index, + &next_stag_index); + if (ret) + return stag; + stag = stag_index << IRDMA_CQPSQ_STAG_IDX_S; + stag |= driver_key; + stag += (u32)consumer_key; + + return stag; +} + +/** + * irdma_next_pbl_addr - Get next pbl address + * @pbl: pointer to a pble + * @pinfo: info pointer + * @idx: index + */ +static inline u64 *irdma_next_pbl_addr(u64 *pbl, struct irdma_pble_info **pinfo, + u32 *idx) +{ + *idx += 1; + if (!(*pinfo) || *idx != (*pinfo)->cnt) + return ++pbl; + *idx = 0; + (*pinfo)++; + + return (u64 *)(uintptr_t)(*pinfo)->addr; +} + +/** + * irdma_copy_user_pgaddrs - copy user page address to pble's os locally + * @iwmr: iwmr for IB's user page addresses + * @pbl: ple pointer to save 1 level or 0 level pble + * @level: indicated level 0, 1 or 2 + */ +static void irdma_copy_user_pgaddrs(struct irdma_mr *iwmr, u64 *pbl, + enum irdma_pble_level level) +{ + struct ib_umem *region = iwmr->region; + struct irdma_pbl *iwpbl = &iwmr->iwpbl; + struct irdma_pble_alloc *palloc = &iwpbl->pble_alloc; + struct irdma_pble_info *pinfo; + struct ib_block_iter biter; + u32 idx = 0; + u32 pbl_cnt = 0; + + pinfo = (level == PBLE_LEVEL_1) ? 
NULL : palloc->level2.leaf; + + if (iwmr->type == IW_MEMREG_TYPE_QP) + iwpbl->qp_mr.sq_page = sg_page(region->sg_head.sgl); + + rdma_for_each_block(region->sg_head.sgl, &biter, region->nmap, + iwmr->page_size) { + *pbl = rdma_block_iter_dma_address(&biter); + if (++pbl_cnt == palloc->total_cnt) + break; + pbl = irdma_next_pbl_addr(pbl, &pinfo, &idx); + } +} + +/** + * irdma_check_mem_contiguous - check if pbls stored in arr are contiguous + * @arr: lvl1 pbl array + * @npages: page count + * @pg_size: page size + * + */ +static bool irdma_check_mem_contiguous(u64 *arr, u32 npages, u32 pg_size) +{ + u32 pg_idx; + + for (pg_idx = 0; pg_idx < npages; pg_idx++) { + if ((*arr + (pg_size * pg_idx)) != arr[pg_idx]) + return false; + } + + return true; +} + +/** + * irdma_check_mr_contiguous - check if MR is physically contiguous + * @palloc: pbl allocation struct + * @pg_size: page size + */ +static bool irdma_check_mr_contiguous(struct irdma_pble_alloc *palloc, + u32 pg_size) +{ + struct irdma_pble_level2 *lvl2 = &palloc->level2; + struct irdma_pble_info *leaf = lvl2->leaf; + u64 *arr = NULL; + u64 *start_addr = NULL; + int i; + bool ret; + + if (palloc->level == PBLE_LEVEL_1) { + arr = (u64 *)(uintptr_t)palloc->level1.addr; + ret = irdma_check_mem_contiguous(arr, palloc->total_cnt, + pg_size); + return ret; + } + + start_addr = (u64 *)(uintptr_t)leaf->addr; + + for (i = 0; i < lvl2->leaf_cnt; i++, leaf++) { + arr = (u64 *)(uintptr_t)leaf->addr; + if ((*start_addr + (i * pg_size * PBLE_PER_PAGE)) != *arr) + return false; + ret = irdma_check_mem_contiguous(arr, leaf->cnt, pg_size); + if (!ret) + return false; + } + + return true; +} + +/** + * irdma_setup_pbles - copy user pg address to pble's + * @rf: RDMA PCI function + * @iwmr: mr pointer for this memory registration + * @use_pbles: flag if to use pble's + */ +static int irdma_setup_pbles(struct irdma_pci_f *rf, struct irdma_mr *iwmr, + bool use_pbles) +{ + struct irdma_pbl *iwpbl = &iwmr->iwpbl; + struct irdma_pble_alloc *palloc = &iwpbl->pble_alloc; + struct irdma_pble_info *pinfo; + u64 *pbl; + enum irdma_status_code status; + enum irdma_pble_level level = PBLE_LEVEL_1; + + if (use_pbles) { + status = irdma_get_pble(rf->pble_rsrc, palloc, iwmr->page_cnt, + false); + if (status) + return -ENOMEM; + + iwpbl->pbl_allocated = true; + level = palloc->level; + pinfo = (level == PBLE_LEVEL_1) ? 
&palloc->level1 : + palloc->level2.leaf; + pbl = (u64 *)(uintptr_t)pinfo->addr; + } else { + pbl = iwmr->pgaddrmem; + } + + irdma_copy_user_pgaddrs(iwmr, pbl, level); + + if (use_pbles) + iwmr->pgaddrmem[0] = *pbl; + + return 0; +} + +/** + * irdma_handle_q_mem - handle memory for qp and cq + * @iwdev: irdma device + * @req: information for q memory management + * @iwpbl: pble struct + * @use_pbles: flag to use pble + */ +static int irdma_handle_q_mem(struct irdma_device *iwdev, + struct irdma_mem_reg_req *req, + struct irdma_pbl *iwpbl, bool use_pbles) +{ + struct irdma_pble_alloc *palloc = &iwpbl->pble_alloc; + struct irdma_mr *iwmr = iwpbl->iwmr; + struct irdma_qp_mr *qpmr = &iwpbl->qp_mr; + struct irdma_cq_mr *cqmr = &iwpbl->cq_mr; + struct irdma_hmc_pble *hmc_p; + u64 *arr = iwmr->pgaddrmem; + u32 pg_size; + int err = 0; + int total; + bool ret = true; + + total = req->sq_pages + req->rq_pages + req->cq_pages; + pg_size = iwmr->page_size; + err = irdma_setup_pbles(iwdev->rf, iwmr, use_pbles); + if (err) + return err; + + if (use_pbles && palloc->level != PBLE_LEVEL_1) { + irdma_free_pble(iwdev->rf->pble_rsrc, palloc); + iwpbl->pbl_allocated = false; + return -ENOMEM; + } + + if (use_pbles) + arr = (u64 *)(uintptr_t)palloc->level1.addr; + + switch (iwmr->type) { + case IW_MEMREG_TYPE_QP: + hmc_p = &qpmr->sq_pbl; + qpmr->shadow = (dma_addr_t)arr[total]; + + if (use_pbles) { + ret = irdma_check_mem_contiguous(arr, req->sq_pages, + pg_size); + if (ret) + ret = irdma_check_mem_contiguous(&arr[req->sq_pages], + req->rq_pages, + pg_size); + } + + if (!ret) { + hmc_p->idx = palloc->level1.idx; + hmc_p = &qpmr->rq_pbl; + hmc_p->idx = palloc->level1.idx + req->sq_pages; + } else { + hmc_p->addr = arr[0]; + hmc_p = &qpmr->rq_pbl; + hmc_p->addr = arr[req->sq_pages]; + } + break; + case IW_MEMREG_TYPE_CQ: + hmc_p = &cqmr->cq_pbl; + + if (!cqmr->split) + cqmr->shadow = (dma_addr_t)arr[total]; + + if (use_pbles) + ret = irdma_check_mem_contiguous(arr, req->cq_pages, + pg_size); + + if (!ret) + hmc_p->idx = palloc->level1.idx; + else + hmc_p->addr = arr[0]; + break; + default: + ibdev_dbg(to_ibdev(iwdev), "VERBS: MR type error\n"); + err = -EINVAL; + } + + if (use_pbles && ret) { + irdma_free_pble(iwdev->rf->pble_rsrc, palloc); + iwpbl->pbl_allocated = false; + } + + return err; +} + +/** + * irdma_hw_alloc_mw - create the hw memory window + * @iwdev: irdma device + * @iwmr: pointer to memory window info + */ +static int irdma_hw_alloc_mw(struct irdma_device *iwdev, struct irdma_mr *iwmr) +{ + struct irdma_mw_alloc_info *info; + struct irdma_pd *iwpd = to_iwpd(iwmr->ibmr.pd); + struct irdma_cqp_request *cqp_request; + struct cqp_cmds_info *cqp_info; + + cqp_request = irdma_get_cqp_request(&iwdev->rf->cqp, true); + if (!cqp_request) + return -ENOMEM; + + cqp_info = &cqp_request->info; + info = &cqp_info->in.u.mw_alloc.info; + memset(info, 0, sizeof(*info)); + if (iwmr->ibmw.type == IB_MW_TYPE_1) + info->mw_wide = true; + + info->page_size = PAGE_SIZE; + info->mw_stag_index = iwmr->stag >> IRDMA_CQPSQ_STAG_IDX_S; + info->pd_id = iwpd->sc_pd.pd_id; + info->remote_access = true; + cqp_info->cqp_cmd = IRDMA_OP_MW_ALLOC; + cqp_info->post_sq = 1; + cqp_info->in.u.mw_alloc.dev = &iwdev->rf->sc_dev; + cqp_info->in.u.mw_alloc.scratch = (uintptr_t)cqp_request; + if (irdma_handle_cqp_op(iwdev->rf, cqp_request)) { + ibdev_dbg(to_ibdev(iwdev), "VERBS: CQP-OP allow MW failed\n"); + return -ENOMEM; + } + + return 0; +} + +/** + * irdma_alloc_mw + * @pd: Protection domain + * @type: Window type + * @udata: user data 
pointer + */ +static struct ib_mw *irdma_alloc_mw(struct ib_pd *pd, enum ib_mw_type type, + struct ib_udata *udata) +{ + struct irdma_device *iwdev = to_iwdev(pd->device); + struct irdma_mr *iwmr; + int err_code; + u32 stag; + + iwmr = kzalloc(sizeof(*iwmr), GFP_KERNEL); + if (!iwmr) + return ERR_PTR(-ENOMEM); + + stag = irdma_create_stag(iwdev); + if (!stag) { + err_code = -ENOMEM; + goto err; + } + + iwmr->stag = stag; + iwmr->ibmw.rkey = stag; + iwmr->ibmw.pd = pd; + iwmr->ibmw.type = type; + iwmr->ibmw.device = pd->device; + iwmr->type = IW_MEMREG_TYPE_MW; + + err_code = irdma_hw_alloc_mw(iwdev, iwmr); + if (err_code) { + irdma_free_stag(iwdev, stag); + goto err; + } + + return &iwmr->ibmw; + +err: + kfree(iwmr); + + return ERR_PTR(err_code); +} + +/** + * irdma_dealloc_mw + * @ibmw: memory window structure. + */ +static int irdma_dealloc_mw(struct ib_mw *ibmw) +{ + struct ib_pd *ibpd = ibmw->pd; + struct irdma_pd *iwpd = to_iwpd(ibpd); + struct irdma_mr *iwmr = to_iwmr((struct ib_mr *)ibmw); + struct irdma_device *iwdev = to_iwdev(ibmw->device); + struct irdma_cqp_request *cqp_request; + struct cqp_cmds_info *cqp_info; + struct irdma_dealloc_stag_info *info; + + cqp_request = irdma_get_cqp_request(&iwdev->rf->cqp, true); + if (!cqp_request) + return -ENOMEM; + + cqp_info = &cqp_request->info; + info = &cqp_info->in.u.dealloc_stag.info; + memset(info, 0, sizeof(*info)); + info->pd_id = iwpd->sc_pd.pd_id & 0x00007fff; + info->stag_idx = RS_64_1(ibmw->rkey, IRDMA_CQPSQ_STAG_IDX_S); + info->mr = false; + cqp_info->cqp_cmd = IRDMA_OP_DEALLOC_STAG; + cqp_info->post_sq = 1; + cqp_info->in.u.dealloc_stag.dev = &iwdev->rf->sc_dev; + cqp_info->in.u.dealloc_stag.scratch = (uintptr_t)cqp_request; + if (irdma_handle_cqp_op(iwdev->rf, cqp_request)) + ibdev_dbg(to_ibdev(iwdev), + "VERBS: CQP-OP dealloc MW failed for stag_idx = 0x%x\n", + info->stag_idx); + irdma_free_stag(iwdev, iwmr->stag); + kfree(iwmr); + + return 0; +} + +/** + * irdma_hw_alloc_stag - cqp command to allocate stag + * @iwdev: irdma device + * @iwmr: irdma mr pointer + */ +static int irdma_hw_alloc_stag(struct irdma_device *iwdev, + struct irdma_mr *iwmr) +{ + struct irdma_allocate_stag_info *info; + struct irdma_pd *iwpd = to_iwpd(iwmr->ibmr.pd); + enum irdma_status_code status; + int err = 0; + struct irdma_cqp_request *cqp_request; + struct cqp_cmds_info *cqp_info; + + cqp_request = irdma_get_cqp_request(&iwdev->rf->cqp, true); + if (!cqp_request) + return -ENOMEM; + + cqp_info = &cqp_request->info; + info = &cqp_info->in.u.alloc_stag.info; + memset(info, 0, sizeof(*info)); + info->page_size = PAGE_SIZE; + info->stag_idx = iwmr->stag >> IRDMA_CQPSQ_STAG_IDX_S; + info->pd_id = iwpd->sc_pd.pd_id; + info->total_len = iwmr->len; + info->remote_access = true; + cqp_info->cqp_cmd = IRDMA_OP_ALLOC_STAG; + cqp_info->post_sq = 1; + cqp_info->in.u.alloc_stag.dev = &iwdev->rf->sc_dev; + cqp_info->in.u.alloc_stag.scratch = (uintptr_t)cqp_request; + status = irdma_handle_cqp_op(iwdev->rf, cqp_request); + if (status) { + err = -ENOMEM; + ibdev_dbg(to_ibdev(iwdev), "VERBS: CQP-OP MR alloc stag fail"); + } + + return err; +} + +/** + * irdma_alloc_mr - register stag for fast memory registration + * @pd: ibpd pointer + * @mr_type: memory for stag registrion + * @max_num_sg: man number of pages + * @udata: user data + */ +static struct ib_mr *irdma_alloc_mr(struct ib_pd *pd, enum ib_mr_type mr_type, + u32 max_num_sg, struct ib_udata *udata) +{ + struct irdma_device *iwdev = to_iwdev(pd->device); + struct irdma_pble_alloc *palloc; + struct 
irdma_pbl *iwpbl; + struct irdma_mr *iwmr; + enum irdma_status_code status; + u32 stag; + int err_code = -ENOMEM; + + iwmr = kzalloc(sizeof(*iwmr), GFP_KERNEL); + if (!iwmr) + return ERR_PTR(-ENOMEM); + + stag = irdma_create_stag(iwdev); + if (!stag) { + err_code = -ENOMEM; + goto err; + } + + iwmr->stag = stag; + iwmr->ibmr.rkey = stag; + iwmr->ibmr.lkey = stag; + iwmr->ibmr.pd = pd; + iwmr->ibmr.device = pd->device; + iwpbl = &iwmr->iwpbl; + iwpbl->iwmr = iwmr; + iwmr->type = IW_MEMREG_TYPE_MEM; + palloc = &iwpbl->pble_alloc; + iwmr->page_cnt = max_num_sg; + status = irdma_get_pble(iwdev->rf->pble_rsrc, palloc, iwmr->page_cnt, + true); + if (status) + goto err_get_pble; + + err_code = irdma_hw_alloc_stag(iwdev, iwmr); + if (err_code) + goto err_alloc_stag; + + iwpbl->pbl_allocated = true; + + return &iwmr->ibmr; +err_alloc_stag: + irdma_free_pble(iwdev->rf->pble_rsrc, palloc); +err_get_pble: + irdma_free_stag(iwdev, stag); +err: + kfree(iwmr); + + return ERR_PTR(err_code); +} + +/** + * irdma_set_page - populate pbl list for fmr + * @ibmr: ib mem to access iwarp mr pointer + * @addr: page dma address fro pbl list + */ +static int irdma_set_page(struct ib_mr *ibmr, u64 addr) +{ + struct irdma_mr *iwmr = to_iwmr(ibmr); + struct irdma_pbl *iwpbl = &iwmr->iwpbl; + struct irdma_pble_alloc *palloc = &iwpbl->pble_alloc; + u64 *pbl; + + if (unlikely(iwmr->npages == iwmr->page_cnt)) + return -ENOMEM; + + pbl = (u64 *)(uintptr_t)palloc->level1.addr; + pbl[iwmr->npages++] = addr; + + return 0; +} + +/** + * irdma_map_mr_sg - map of sg list for fmr + * @ibmr: ib mem to access iwarp mr pointer + * @sg: scatter gather list + * @sg_nents: number of sg pages + * @sg_offset: scatter gather list for fmr + */ +static int irdma_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, + int sg_nents, unsigned int *sg_offset) +{ + struct irdma_mr *iwmr = to_iwmr(ibmr); + + iwmr->npages = 0; + + return ib_sg_to_pages(ibmr, sg, sg_nents, sg_offset, irdma_set_page); +} + +/** + * irdma_drain_sq - drain the send queue + * @ibqp: ib qp pointer + */ +static void irdma_drain_sq(struct ib_qp *ibqp) +{ + struct irdma_qp *iwqp = to_iwqp(ibqp); + struct irdma_sc_qp *qp = &iwqp->sc_qp; + + if (IRDMA_RING_MORE_WORK(qp->qp_uk.sq_ring)) + wait_for_completion(&iwqp->sq_drained); +} + +/** + * irdma_drain_rq - drain the receive queue + * @ibqp: ib qp pointer + */ +static void irdma_drain_rq(struct ib_qp *ibqp) +{ + struct irdma_qp *iwqp = to_iwqp(ibqp); + struct irdma_sc_qp *qp = &iwqp->sc_qp; + + if (IRDMA_RING_MORE_WORK(qp->qp_uk.rq_ring)) + wait_for_completion(&iwqp->rq_drained); +} + +/** + * irdma_hwreg_mr - send cqp command for memory registration + * @iwdev: irdma device + * @iwmr: irdma mr pointer + * @access: access for MR + */ +static int irdma_hwreg_mr(struct irdma_device *iwdev, struct irdma_mr *iwmr, + u16 access) +{ + struct irdma_pbl *iwpbl = &iwmr->iwpbl; + struct irdma_reg_ns_stag_info *stag_info; + struct irdma_pd *iwpd = to_iwpd(iwmr->ibmr.pd); + struct irdma_pble_alloc *palloc = &iwpbl->pble_alloc; + enum irdma_status_code status; + int err = 0; + struct irdma_cqp_request *cqp_request; + struct cqp_cmds_info *cqp_info; + + cqp_request = irdma_get_cqp_request(&iwdev->rf->cqp, true); + if (!cqp_request) + return -ENOMEM; + + cqp_info = &cqp_request->info; + stag_info = &cqp_info->in.u.mr_reg_non_shared.info; + memset(stag_info, 0, sizeof(*stag_info)); + stag_info->va = (void *)(unsigned long)iwpbl->user_base; + stag_info->stag_idx = iwmr->stag >> IRDMA_CQPSQ_STAG_IDX_S; + stag_info->stag_key = (u8)iwmr->stag; + 
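The STag programmed into the hardware here is the value assembled by irdma_create_stag() above: a random 8-bit consumer key in the low byte, the allocated MR index in the middle bits, and a random driver key in the remainder; irdma_hwreg_mr() and irdma_free_stag() simply unpack those fields again. A simplified, runnable sketch of that packing, where the 8-bit shift and the mask are assumptions standing in for IRDMA_CQPSQ_STAG_IDX_S and rf->mr_stagmask:

#include <stdint.h>
#include <stdio.h>

#define STAG_IDX_SHIFT 8            /* assumed value of IRDMA_CQPSQ_STAG_IDX_S */
#define MR_STAGMASK    0x00ffff00u  /* assumed value of rf->mr_stagmask */

/* Pack an allocated MR index together with random key material. */
static uint32_t make_stag(uint32_t stag_index, uint32_t random)
{
        uint32_t driver_key = random & ~MR_STAGMASK & ~0xffu;
        uint8_t consumer_key = (uint8_t)random;

        return ((stag_index << STAG_IDX_SHIFT) & MR_STAGMASK) |
               driver_key | consumer_key;
}

int main(void)
{
        uint32_t stag = make_stag(0x123, 0xdeadbeef);

        /* Unpacking mirrors irdma_free_stag()/irdma_hwreg_mr(). */
        printf("stag=0x%08x idx=0x%x key=0x%02x\n", stag,
               (stag & MR_STAGMASK) >> STAG_IDX_SHIFT, (uint8_t)stag);
        return 0;
}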
stag_info->total_len = iwmr->len; + stag_info->access_rights = access; + stag_info->pd_id = iwpd->sc_pd.pd_id; + stag_info->addr_type = IRDMA_ADDR_TYPE_VA_BASED; + stag_info->page_size = iwmr->page_size; + + if (iwpbl->pbl_allocated) { + if (palloc->level == PBLE_LEVEL_1) { + stag_info->first_pm_pbl_index = palloc->level1.idx; + stag_info->chunk_size = 1; + } else { + stag_info->first_pm_pbl_index = palloc->level2.root.idx; + stag_info->chunk_size = 3; + } + } else { + stag_info->reg_addr_pa = iwmr->pgaddrmem[0]; + } + + cqp_info->cqp_cmd = IRDMA_OP_MR_REG_NON_SHARED; + cqp_info->post_sq = 1; + cqp_info->in.u.mr_reg_non_shared.dev = &iwdev->rf->sc_dev; + cqp_info->in.u.mr_reg_non_shared.scratch = (uintptr_t)cqp_request; + status = irdma_handle_cqp_op(iwdev->rf, cqp_request); + if (status) { + err = -ENOMEM; + ibdev_dbg(to_ibdev(iwdev), "VERBS: CQP-OP MR Reg fail"); + } + + return err; +} + +/** + * irdma_reg_user_mr - Register a user memory region + * @pd: ptr of pd + * @start: virtual start address + * @len: length of mr + * @virt: virtual address + * @acc: access of mr + * @udata: user data + */ +static struct ib_mr *irdma_reg_user_mr(struct ib_pd *pd, u64 start, u64 len, + u64 virt, int acc, + struct ib_udata *udata) +{ + struct irdma_device *iwdev = to_iwdev(pd->device); + struct irdma_ucontext *ucontext = + rdma_udata_to_drv_context(udata, struct irdma_ucontext, + ibucontext); + struct irdma_pble_alloc *palloc; + struct irdma_pbl *iwpbl; + struct irdma_mr *iwmr; + struct ib_umem *region; + struct irdma_mem_reg_req req; + u64 pbl_depth = 0; + u32 stag = 0; + u16 access; + u64 region_len; + bool use_pbles = false; + unsigned long flags; + int err = -EINVAL; + int ret, pg_shift; + + if (!udata) + return ERR_PTR(-EOPNOTSUPP); + + if (len > iwdev->rf->sc_dev.hw_attrs.max_mr_size) + return ERR_PTR(-EINVAL); + + region = ib_umem_get(pd->device, start, len, acc); + if (IS_ERR(region)) + return (struct ib_mr *)region; + + if (ib_copy_from_udata(&req, udata, min(sizeof(req), udata->inlen))) { + ib_umem_release(region); + return ERR_PTR(-EFAULT); + } + + iwmr = kzalloc(sizeof(*iwmr), GFP_KERNEL); + if (!iwmr) { + ib_umem_release(region); + return ERR_PTR(-ENOMEM); + } + + iwpbl = &iwmr->iwpbl; + iwpbl->iwmr = iwmr; + iwmr->region = region; + iwmr->ibmr.pd = pd; + iwmr->ibmr.device = pd->device; + iwmr->page_size = PAGE_SIZE; + + if (req.reg_type == IW_MEMREG_TYPE_MEM) + iwmr->page_size = ib_umem_find_best_pgsz(region, + SZ_4K | SZ_2M | SZ_1G, + virt); + region_len = region->length + (start & (iwmr->page_size - 1)); + pg_shift = ffs(iwmr->page_size) - 1; + pbl_depth = region_len >> pg_shift; + pbl_depth += (region_len & (iwmr->page_size - 1)) ? 
1 : 0; + iwmr->len = region->length; + iwpbl->user_base = virt; + palloc = &iwpbl->pble_alloc; + iwmr->type = req.reg_type; + iwmr->page_cnt = (u32)pbl_depth; + + switch (req.reg_type) { + case IW_MEMREG_TYPE_QP: + use_pbles = ((req.sq_pages + req.rq_pages) > 2); + err = irdma_handle_q_mem(iwdev, &req, iwpbl, use_pbles); + if (err) + goto error; + + spin_lock_irqsave(&ucontext->qp_reg_mem_list_lock, flags); + list_add_tail(&iwpbl->list, &ucontext->qp_reg_mem_list); + iwpbl->on_list = true; + spin_unlock_irqrestore(&ucontext->qp_reg_mem_list_lock, flags); + break; + case IW_MEMREG_TYPE_CQ: + use_pbles = (req.cq_pages > 1); + err = irdma_handle_q_mem(iwdev, &req, iwpbl, use_pbles); + if (err) + goto error; + + spin_lock_irqsave(&ucontext->cq_reg_mem_list_lock, flags); + list_add_tail(&iwpbl->list, &ucontext->cq_reg_mem_list); + iwpbl->on_list = true; + spin_unlock_irqrestore(&ucontext->cq_reg_mem_list_lock, flags); + break; + case IW_MEMREG_TYPE_MEM: + use_pbles = (iwmr->page_cnt != 1); + access = IRDMA_ACCESS_FLAGS_LOCALREAD; + + err = irdma_setup_pbles(iwdev->rf, iwmr, use_pbles); + if (err) + goto error; + + if (use_pbles) { + ret = irdma_check_mr_contiguous(palloc, + iwmr->page_size); + if (ret) { + irdma_free_pble(iwdev->rf->pble_rsrc, palloc); + iwpbl->pbl_allocated = false; + } + } + + access |= irdma_get_user_access(acc); + stag = irdma_create_stag(iwdev); + if (!stag) { + err = -ENOMEM; + goto error; + } + + iwmr->stag = stag; + iwmr->ibmr.rkey = stag; + iwmr->ibmr.lkey = stag; + err = irdma_hwreg_mr(iwdev, iwmr, access); + if (err) { + irdma_free_stag(iwdev, stag); + goto error; + } + + break; + default: + goto error; + } + + iwmr->type = req.reg_type; + + return &iwmr->ibmr; + +error: + if (palloc->level != PBLE_LEVEL_0 && iwpbl->pbl_allocated) + irdma_free_pble(iwdev->rf->pble_rsrc, palloc); + ib_umem_release(region); + kfree(iwmr); + + return ERR_PTR(err); +} + +/** + * irdma_reg_phys_mr - register kernel physical memory + * @pd: ibpd pointer + * @addr: physical address of memory to register + * @size: size of memory to register + * @acc: Access rights + * @iova_start: start of virtual address for physical buffers + */ +struct ib_mr *irdma_reg_phys_mr(struct ib_pd *pd, u64 addr, u64 size, int acc, + u64 *iova_start) +{ + struct irdma_device *iwdev = to_iwdev(pd->device); + struct irdma_pbl *iwpbl; + struct irdma_mr *iwmr; + enum irdma_status_code status; + u32 stag; + u16 access = IRDMA_ACCESS_FLAGS_LOCALREAD; + int ret; + + iwmr = kzalloc(sizeof(*iwmr), GFP_KERNEL); + if (!iwmr) + return ERR_PTR(-ENOMEM); + + iwmr->ibmr.pd = pd; + iwmr->ibmr.device = pd->device; + iwpbl = &iwmr->iwpbl; + iwpbl->iwmr = iwmr; + iwmr->type = IW_MEMREG_TYPE_MEM; + iwpbl->user_base = *iova_start; + stag = irdma_create_stag(iwdev); + if (!stag) { + ret = -ENOMEM; + goto err; + } + + access |= irdma_get_user_access(acc); + iwmr->stag = stag; + iwmr->ibmr.rkey = stag; + iwmr->ibmr.lkey = stag; + iwmr->page_cnt = 1; + iwmr->pgaddrmem[0] = addr; + iwmr->len = size; + status = irdma_hwreg_mr(iwdev, iwmr, access); + if (status) { + irdma_free_stag(iwdev, stag); + ret = -ENOMEM; + goto err; + } + + return &iwmr->ibmr; + +err: + kfree(iwmr); + + return ERR_PTR(ret); +} + +/** + * irdma_get_dma_mr - register physical mem + * @pd: ptr of pd + * @acc: access for memory + */ +static struct ib_mr *irdma_get_dma_mr(struct ib_pd *pd, int acc) +{ + u64 kva = 0; + + return irdma_reg_phys_mr(pd, 0, 0, acc, &kva); +} + +/** + * irdma_del_mem_list - Deleting pbl list entries for CQ/QP + * @iwmr: iwmr for IB's user page 
addresses + * @ucontext: ptr to user context + */ +static void irdma_del_memlist(struct irdma_mr *iwmr, + struct irdma_ucontext *ucontext) +{ + struct irdma_pbl *iwpbl = &iwmr->iwpbl; + unsigned long flags; + + switch (iwmr->type) { + case IW_MEMREG_TYPE_CQ: + spin_lock_irqsave(&ucontext->cq_reg_mem_list_lock, flags); + if (iwpbl->on_list) { + iwpbl->on_list = false; + list_del(&iwpbl->list); + } + spin_unlock_irqrestore(&ucontext->cq_reg_mem_list_lock, flags); + break; + case IW_MEMREG_TYPE_QP: + spin_lock_irqsave(&ucontext->qp_reg_mem_list_lock, flags); + if (iwpbl->on_list) { + iwpbl->on_list = false; + list_del(&iwpbl->list); + } + spin_unlock_irqrestore(&ucontext->qp_reg_mem_list_lock, flags); + break; + default: + break; + } +} + +/** + * irdma_dereg_mr - deregister mr + * @ib_mr: mr ptr for dereg + * @udata: user data + */ +static int irdma_dereg_mr(struct ib_mr *ib_mr, struct ib_udata *udata) +{ + struct ib_pd *ibpd = ib_mr->pd; + struct irdma_pd *iwpd = to_iwpd(ibpd); + struct irdma_mr *iwmr = to_iwmr(ib_mr); + struct irdma_device *iwdev = to_iwdev(ib_mr->device); + enum irdma_status_code status; + struct irdma_dealloc_stag_info *info; + struct irdma_pbl *iwpbl = &iwmr->iwpbl; + struct irdma_pble_alloc *palloc = &iwpbl->pble_alloc; + struct irdma_cqp_request *cqp_request; + struct cqp_cmds_info *cqp_info; + u32 stag_idx; + + if (iwmr->type != IW_MEMREG_TYPE_MEM) { + if (iwmr->region) { + struct irdma_ucontext *ucontext; + + ucontext = rdma_udata_to_drv_context(udata, + struct irdma_ucontext, + ibucontext); + irdma_del_memlist(iwmr, ucontext); + } + goto done; + } + + cqp_request = irdma_get_cqp_request(&iwdev->rf->cqp, true); + if (!cqp_request) + return -ENOMEM; + + cqp_info = &cqp_request->info; + info = &cqp_info->in.u.dealloc_stag.info; + memset(info, 0, sizeof(*info)); + info->pd_id = iwpd->sc_pd.pd_id & 0x00007fff; + info->stag_idx = RS_64_1(ib_mr->rkey, IRDMA_CQPSQ_STAG_IDX_S); + stag_idx = info->stag_idx; + info->mr = true; + if (iwpbl->pbl_allocated) + info->dealloc_pbl = true; + + cqp_info->cqp_cmd = IRDMA_OP_DEALLOC_STAG; + cqp_info->post_sq = 1; + cqp_info->in.u.dealloc_stag.dev = &iwdev->rf->sc_dev; + cqp_info->in.u.dealloc_stag.scratch = (uintptr_t)cqp_request; + status = irdma_handle_cqp_op(iwdev->rf, cqp_request); + if (status) + ibdev_dbg(to_ibdev(iwdev), + "VERBS: CQP-OP dealloc failed for stag_idx = 0x%x\n", + stag_idx); + irdma_free_stag(iwdev, iwmr->stag); +done: + if (iwpbl->pbl_allocated) + irdma_free_pble(iwdev->rf->pble_rsrc, palloc); + ib_umem_release(iwmr->region); + kfree(iwmr); + + return 0; +} + +/** + * irdma_copy_sg_list - copy sg list for qp + * @sg_list: copied into sg_list + * @sgl: copy from sgl + * @num_sges: count of sg entries + */ +static void irdma_copy_sg_list(struct irdma_sge *sg_list, struct ib_sge *sgl, + int num_sges) +{ + unsigned int i; + + for (i = 0; (i < num_sges) && (i < IRDMA_MAX_WQ_FRAGMENT_COUNT); i++) { + sg_list[i].tag_off = sgl[i].addr; + sg_list[i].len = sgl[i].length; + sg_list[i].stag = sgl[i].lkey; + } +} + +/** + * irdma_post_send - kernel application wr + * @ibqp: qp ptr for wr + * @ib_wr: work request ptr + * @bad_wr: return of bad wr if err + */ +static int irdma_post_send(struct ib_qp *ibqp, + const struct ib_send_wr *ib_wr, + const struct ib_send_wr **bad_wr) +{ + struct irdma_qp *iwqp; + struct irdma_qp_uk *ukqp; + struct irdma_sc_dev *dev; + struct irdma_post_sq_info info; + enum irdma_status_code ret; + int err = 0; + unsigned long flags; + bool inv_stag; + struct irdma_ah *ah; + bool reflush = false; + + iwqp 
= to_iwqp(ibqp); + ukqp = &iwqp->sc_qp.qp_uk; + dev = &iwqp->iwdev->rf->sc_dev; + + spin_lock_irqsave(&iwqp->lock, flags); + if (iwqp->flush_issued && ukqp->sq_flush_complete) + reflush = true; + + while (ib_wr) { + memset(&info, 0, sizeof(info)); + inv_stag = false; + info.wr_id = (ib_wr->wr_id); + if ((ib_wr->send_flags & IB_SEND_SIGNALED) || iwqp->sig_all) + info.signaled = true; + if (ib_wr->send_flags & IB_SEND_FENCE) + info.read_fence = true; + switch (ib_wr->opcode) { + case IB_WR_SEND_WITH_IMM: + if (ukqp->qp_caps & IRDMA_SEND_WITH_IMM) { + info.imm_data_valid = true; + info.imm_data = ntohl(ib_wr->ex.imm_data); + } else { + err = -EINVAL; + break; + } + /* fall-through */ + case IB_WR_SEND: + /* fall-through */ + case IB_WR_SEND_WITH_INV: + if (ib_wr->opcode == IB_WR_SEND || + ib_wr->opcode == IB_WR_SEND_WITH_IMM) { + if (ib_wr->send_flags & IB_SEND_SOLICITED) + info.op_type = IRDMA_OP_TYPE_SEND_SOL; + else + info.op_type = IRDMA_OP_TYPE_SEND; + } else { + if (ib_wr->send_flags & IB_SEND_SOLICITED) + info.op_type = IRDMA_OP_TYPE_SEND_SOL_INV; + else + info.op_type = IRDMA_OP_TYPE_SEND_INV; + info.stag_to_inv = ib_wr->ex.invalidate_rkey; + } + + if (ib_wr->send_flags & IB_SEND_INLINE) { + info.op.inline_send.data = (void *)(unsigned long) + ib_wr->sg_list[0].addr; + info.op.inline_send.len = ib_wr->sg_list[0].length; + if (iwqp->ibqp.qp_type == IB_QPT_UD || + iwqp->ibqp.qp_type == IB_QPT_GSI) { + ah = to_iwah(ud_wr(ib_wr)->ah); + info.op.inline_send.ah_id = ah->sc_ah.ah_info.ah_idx; + info.op.inline_send.qkey = ud_wr(ib_wr)->remote_qkey; + info.op.inline_send.dest_qp = ud_wr(ib_wr)->remote_qpn; + } + ret = ukqp->qp_ops.iw_inline_send(ukqp, &info, + false); + } else { + info.op.send.num_sges = ib_wr->num_sge; + info.op.send.sg_list = (struct irdma_sge *) + ib_wr->sg_list; + if (iwqp->ibqp.qp_type == IB_QPT_UD || + iwqp->ibqp.qp_type == IB_QPT_GSI) { + ah = to_iwah(ud_wr(ib_wr)->ah); + info.op.send.ah_id = ah->sc_ah.ah_info.ah_idx; + info.op.send.qkey = ud_wr(ib_wr)->remote_qkey; + info.op.send.dest_qp = ud_wr(ib_wr)->remote_qpn; + } + ret = ukqp->qp_ops.iw_send(ukqp, &info, false); + } + + if (ret) { + if (ret == IRDMA_ERR_QP_TOOMANY_WRS_POSTED) + err = -ENOMEM; + else + err = -EINVAL; + } + break; + case IB_WR_RDMA_WRITE_WITH_IMM: + if (ukqp->qp_caps & IRDMA_WRITE_WITH_IMM) { + info.imm_data_valid = true; + info.imm_data = ntohl(ib_wr->ex.imm_data); + } else { + err = -EINVAL; + break; + } + /* fall-through */ + case IB_WR_RDMA_WRITE: + if (ib_wr->send_flags & IB_SEND_SOLICITED) + info.op_type = IRDMA_OP_TYPE_RDMA_WRITE_SOL; + else + info.op_type = IRDMA_OP_TYPE_RDMA_WRITE; + + if (ib_wr->send_flags & IB_SEND_INLINE) { + info.op.inline_rdma_write.data = (void *)(uintptr_t)ib_wr->sg_list[0].addr; + info.op.inline_rdma_write.len = ib_wr->sg_list[0].length; + info.op.inline_rdma_write.rem_addr.tag_off = rdma_wr(ib_wr)->remote_addr; + info.op.inline_rdma_write.rem_addr.stag = rdma_wr(ib_wr)->rkey; + ret = ukqp->qp_ops.iw_inline_rdma_write(ukqp, &info, false); + } else { + info.op.rdma_write.lo_sg_list = (void *)ib_wr->sg_list; + info.op.rdma_write.num_lo_sges = ib_wr->num_sge; + info.op.rdma_write.rem_addr.tag_off = rdma_wr(ib_wr)->remote_addr; + info.op.rdma_write.rem_addr.stag = rdma_wr(ib_wr)->rkey; + ret = ukqp->qp_ops.iw_rdma_write(ukqp, &info, false); + } + + if (ret) { + if (ret == IRDMA_ERR_QP_TOOMANY_WRS_POSTED) + err = -ENOMEM; + else + err = -EINVAL; + } + break; + case IB_WR_RDMA_READ_WITH_INV: + inv_stag = true; + /* fall-through*/ + case IB_WR_RDMA_READ: + if 
(ib_wr->num_sge > + dev->hw_attrs.uk_attrs.max_hw_read_sges) { + err = -EINVAL; + break; + } + info.op_type = IRDMA_OP_TYPE_RDMA_READ; + info.op.rdma_read.rem_addr.tag_off = rdma_wr(ib_wr)->remote_addr; + info.op.rdma_read.rem_addr.stag = rdma_wr(ib_wr)->rkey; + info.op.rdma_read.lo_sg_list = (void *)ib_wr->sg_list; + info.op.rdma_read.num_lo_sges = ib_wr->num_sge; + + ret = ukqp->qp_ops.iw_rdma_read(ukqp, &info, inv_stag, + false); + if (ret) { + if (ret == IRDMA_ERR_QP_TOOMANY_WRS_POSTED) + err = -ENOMEM; + else + err = -EINVAL; + } + break; + case IB_WR_LOCAL_INV: + info.op_type = IRDMA_OP_TYPE_INV_STAG; + info.op.inv_local_stag.target_stag = ib_wr->ex.invalidate_rkey; + ret = ukqp->qp_ops.iw_stag_local_invalidate(ukqp, &info, true); + if (ret) + err = -ENOMEM; + break; + case IB_WR_REG_MR: { + struct irdma_mr *iwmr = to_iwmr(reg_wr(ib_wr)->mr); + int flags = reg_wr(ib_wr)->access; + struct irdma_pble_alloc *palloc = &iwmr->iwpbl.pble_alloc; + struct irdma_fast_reg_stag_info info = {}; + + info.access_rights = IRDMA_ACCESS_FLAGS_LOCALREAD; + info.access_rights |= irdma_get_user_access(flags); + info.stag_key = reg_wr(ib_wr)->key & 0xff; + info.stag_idx = reg_wr(ib_wr)->key >> 8; + info.page_size = reg_wr(ib_wr)->mr->page_size; + info.wr_id = ib_wr->wr_id; + info.addr_type = IRDMA_ADDR_TYPE_VA_BASED; + info.va = (void *)(uintptr_t)iwmr->ibmr.iova; + info.total_len = iwmr->ibmr.length; + info.reg_addr_pa = *((u64 *)(uintptr_t)palloc->level1.addr); + info.first_pm_pbl_index = palloc->level1.idx; + info.local_fence = ib_wr->send_flags & IB_SEND_FENCE; + if (iwmr->npages > IRDMA_MIN_PAGES_PER_FMR) + info.chunk_size = 1; + ret = dev->iw_priv_qp_ops->iw_mr_fast_register(&iwqp->sc_qp, + &info, + true); + if (ret) + err = -ENOMEM; + break; + } + default: + err = -EINVAL; + ibdev_dbg(to_ibdev(iwqp->iwdev), + "VERBS: upost_send bad opcode = 0x%x\n", + ib_wr->opcode); + break; + } + + if (err) + break; + ib_wr = ib_wr->next; + } + + if (!iwqp->flush_issued && iwqp->hw_iwarp_state <= IRDMA_QP_STATE_RTS) { + ukqp->qp_ops.iw_qp_post_wr(ukqp); + spin_unlock_irqrestore(&iwqp->lock, flags); + } else if (reflush) { + ukqp->sq_flush_complete = false; + spin_unlock_irqrestore(&iwqp->lock, flags); + irdma_flush_wqes(iwqp, IRDMA_FLUSH_SQ | IRDMA_REFLUSH); + } else { + spin_unlock_irqrestore(&iwqp->lock, flags); + } + if (err) + *bad_wr = ib_wr; + + return err; +} + +/** + * irdma_post_recv - post receive wr for kernel application + * @ibqp: ib qp pointer + * @ib_wr: work request for receive + * @bad_wr: bad wr caused an error + */ +static int irdma_post_recv(struct ib_qp *ibqp, + const struct ib_recv_wr *ib_wr, + const struct ib_recv_wr **bad_wr) +{ + struct irdma_qp *iwqp; + struct irdma_qp_uk *ukqp; + struct irdma_post_rq_info post_recv = {}; + struct irdma_sge sg_list[IRDMA_MAX_WQ_FRAGMENT_COUNT]; + enum irdma_status_code ret = 0; + unsigned long flags; + int err = 0; + bool reflush = false; + + iwqp = to_iwqp(ibqp); + ukqp = &iwqp->sc_qp.qp_uk; + + spin_lock_irqsave(&iwqp->lock, flags); + if (iwqp->flush_issued && ukqp->rq_flush_complete) + reflush = true; + + while (ib_wr) { + post_recv.num_sges = ib_wr->num_sge; + post_recv.wr_id = ib_wr->wr_id; + irdma_copy_sg_list(sg_list, ib_wr->sg_list, ib_wr->num_sge); + post_recv.sg_list = sg_list; + ret = ukqp->qp_ops.iw_post_receive(ukqp, &post_recv); + if (ret) { + ibdev_dbg(to_ibdev(iwqp->iwdev), + "VERBS: post_recv err %d\n", ret); + if (ret == IRDMA_ERR_QP_TOOMANY_WRS_POSTED) + err = -ENOMEM; + else + err = -EINVAL; + goto out; + } + + ib_wr = ib_wr->next; + 
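irdma_post_send() and irdma_post_recv() above walk exactly the WR chains a consumer builds, returning the first un-posted WR through *bad_wr on failure; kernel ULPs reach them via ib_post_send()/ib_post_recv(), while user-space QPs post through the provider. A hedged libibverbs sketch of building such a chain (the buffer address and lkey are parameters because they would come from a previously registered MR):

#include <infiniband/verbs.h>
#include <stdint.h>
#include <stdio.h>

/* Post two chained receives and one signaled send on an existing QP. */
int post_example(struct ibv_qp *qp, uint64_t buf, uint32_t len, uint32_t lkey)
{
        struct ibv_sge sge[2] = {
                { .addr = buf,       .length = len, .lkey = lkey },
                { .addr = buf + len, .length = len, .lkey = lkey },
        };
        struct ibv_recv_wr rwr[2] = {
                { .wr_id = 1, .sg_list = &sge[0], .num_sge = 1, .next = &rwr[1] },
                { .wr_id = 2, .sg_list = &sge[1], .num_sge = 1 },
        };
        struct ibv_recv_wr *bad_rwr;
        struct ibv_send_wr swr = {
                .wr_id = 3, .sg_list = &sge[0], .num_sge = 1,
                .opcode = IBV_WR_SEND, .send_flags = IBV_SEND_SIGNALED,
        };
        struct ibv_send_wr *bad_swr;

        if (ibv_post_recv(qp, rwr, &bad_rwr)) {
                /* bad_rwr points at the first WR that was not posted. */
                fprintf(stderr, "post_recv failed at wr_id=%llu\n",
                        (unsigned long long)bad_rwr->wr_id);
                return -1;
        }
        return ibv_post_send(qp, &swr, &bad_swr);
}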
} + +out: + if (reflush) { + ukqp->rq_flush_complete = false; + spin_unlock_irqrestore(&iwqp->lock, flags); + irdma_flush_wqes(iwqp, IRDMA_FLUSH_RQ | IRDMA_REFLUSH); + } else { + spin_unlock_irqrestore(&iwqp->lock, flags); + } + + if (err) + *bad_wr = ib_wr; + + return err; +} + +/** + * irdma_process_cqe - process cqe info + * @entry: processed cqe + * @cq_poll_info: cqe info + */ +static void irdma_process_cqe(struct ib_wc *entry, + struct irdma_cq_poll_info *cq_poll_info) +{ + struct irdma_qp *iwqp; + struct irdma_sc_qp *qp; + + entry->wc_flags = 0; + entry->pkey_index = 0; + entry->wr_id = cq_poll_info->wr_id; + + if (cq_poll_info->error) { + if (cq_poll_info->comp_status == + IRDMA_COMPL_STATUS_FLUSHED) + entry->status = IB_WC_WR_FLUSH_ERR; + else if (cq_poll_info->comp_status == + IRDMA_COMPL_STATUS_INVALID_LEN) + entry->status = IB_WC_LOC_LEN_ERR; + else + entry->status = IB_WC_GENERAL_ERR; + entry->vendor_err = cq_poll_info->major_err << 16 | + cq_poll_info->minor_err; + } else { + entry->status = IB_WC_SUCCESS; + if (cq_poll_info->imm_valid) { + entry->ex.imm_data = htonl(cq_poll_info->imm_data); + entry->wc_flags |= IB_WC_WITH_IMM; + } + if (cq_poll_info->ud_smac_valid) { + ether_addr_copy(entry->smac, cq_poll_info->ud_smac); + entry->wc_flags |= IB_WC_WITH_SMAC; + } + + if (cq_poll_info->ud_vlan_valid) { + entry->vlan_id = cq_poll_info->ud_vlan & VLAN_VID_MASK; + entry->wc_flags |= IB_WC_WITH_VLAN; + entry->sl = cq_poll_info->ud_vlan >> VLAN_PRIO_SHIFT; + } else { + entry->sl = 0; + } + } + + switch (cq_poll_info->op_type) { + case IRDMA_OP_TYPE_RDMA_WRITE: + entry->opcode = IB_WC_RDMA_WRITE; + break; + case IRDMA_OP_TYPE_RDMA_READ_INV_STAG: + case IRDMA_OP_TYPE_RDMA_READ: + entry->opcode = IB_WC_RDMA_READ; + break; + case IRDMA_OP_TYPE_SEND_INV: + case IRDMA_OP_TYPE_SEND_SOL: + case IRDMA_OP_TYPE_SEND_SOL_INV: + case IRDMA_OP_TYPE_SEND: + entry->opcode = IB_WC_SEND; + if (cq_poll_info->stag_invalid_set) + entry->ex.invalidate_rkey = cq_poll_info->inv_stag; + break; + case IRDMA_OP_TYPE_REC: + entry->opcode = IB_WC_RECV; + break; + case IRDMA_OP_TYPE_REC_IMM: + entry->opcode = IB_WC_RECV_RDMA_WITH_IMM; + break; + default: + entry->opcode = IB_WC_RECV; + break; + } + + qp = cq_poll_info->qp_handle; + entry->qp = qp->qp_uk.back_qp; + + if (qp->qp_type == IRDMA_QP_TYPE_ROCE_UD) { + entry->src_qp = cq_poll_info->ud_src_qpn; + entry->slid = 0; + entry->wc_flags |= + (IB_WC_GRH | IB_WC_WITH_NETWORK_HDR_TYPE); + entry->network_hdr_type = cq_poll_info->ipv4 ? 
+ RDMA_NETWORK_IPV4 : + RDMA_NETWORK_IPV6; + } else { + entry->src_qp = cq_poll_info->qp_id; + } + iwqp = qp->qp_uk.back_qp; + if (iwqp->iwarp_state > IRDMA_QP_STATE_RTS) { + if (!IRDMA_RING_MORE_WORK(qp->qp_uk.sq_ring)) + complete(&iwqp->sq_drained); + if (!IRDMA_RING_MORE_WORK(qp->qp_uk.rq_ring)) + complete(&iwqp->rq_drained); + } + entry->byte_len = cq_poll_info->bytes_xfered; +} + +/** + * irdma_get_cqes - get cq entries + * @num_entries: requested number of entries + * @cqe_count: received number of entries + * @ukcq: cq to get completion entries from + * @new_cqe: true, if at least one completion + * @entry: wr of a completed entry + */ +static int irdma_get_cqes(struct irdma_cq_uk *ukcq, + int num_entries, + int *cqe_count, + bool *new_cqe, + struct ib_wc **entry) +{ + struct irdma_cq_poll_info cq_poll_info; + int ret = 0; + + while (*cqe_count < num_entries) { + ret = ukcq->ops.iw_cq_poll_cmpl(ukcq, &cq_poll_info); + if (ret == IRDMA_ERR_Q_EMPTY) { + break; + } else if (ret == IRDMA_ERR_Q_DESTROYED) { + *new_cqe = true; + continue; + } else if (ret) { + if (!*cqe_count) + *cqe_count = -1; + return -EINVAL; + } + *new_cqe = true; + irdma_process_cqe(*entry, &cq_poll_info); + (*cqe_count)++; + (*entry)++; + } + + return 0; +} + +/** + * irdma_poll_cq - poll cq for completion (kernel apps) + * @ibcq: cq to poll + * @num_entries: number of entries to poll + * @entry: wr of a completed entry + */ +static int irdma_poll_cq(struct ib_cq *ibcq, int num_entries, + struct ib_wc *entry) +{ + struct list_head *tmp_node, *list_node; + struct irdma_cq_buf *last_buf = NULL; + struct irdma_cq_buf *cq_buf; + enum irdma_status_code ret; + struct irdma_device *iwdev; + struct irdma_cq_uk *ukcq; + struct irdma_cq *iwcq; + bool new_cqe = false; + int resized_bufs = 0; + unsigned long flags; + int cqe_count = 0; + + iwcq = to_iwcq(ibcq); + iwdev = to_iwdev(ibcq->device); + ukcq = &iwcq->sc_cq.cq_uk; + + spin_lock_irqsave(&iwcq->lock, flags); + /* go through the list of previously resized CQ buffers */ + list_for_each_safe(list_node, tmp_node, &iwcq->resize_list) { + bool last_cqe = false; + + cq_buf = container_of(list_node, struct irdma_cq_buf, list); + ret = irdma_get_cqes(&cq_buf->cq_uk, num_entries, &cqe_count, + &last_cqe, &entry); + if (ret) + goto exit; + + /* save the resized CQ buffer which has received the last cqe */ + if (last_cqe) + last_buf = cq_buf; + } + + /* check the current CQ buffer for new cqes */ + ret = irdma_get_cqes(ukcq, num_entries, &cqe_count, &new_cqe, &entry); + if (ret) + goto exit; + + if (new_cqe) + /* all previous CQ resizes are complete */ + resized_bufs = irdma_process_resize_list(iwcq, iwdev, NULL); + else if (last_buf) + /* only CQ resizes up to the last_buf are complete */ + resized_bufs = irdma_process_resize_list(iwcq, iwdev, last_buf); + if (resized_bufs) + /* report to the HW the number of complete CQ resizes */ + ukcq->ops.iw_cq_set_resized_cnt(ukcq, resized_bufs); + +exit: + spin_unlock_irqrestore(&iwcq->lock, flags); + + return cqe_count; +} + +/** + * irdma_req_notify_cq - arm cq kernel application + * @ibcq: cq to arm + * @notify_flags: notification flags + */ +static int irdma_req_notify_cq(struct ib_cq *ibcq, + enum ib_cq_notify_flags notify_flags) +{ + struct irdma_cq *iwcq; + struct irdma_cq_uk *ukcq; + unsigned long flags; + enum irdma_cmpl_notify cq_notify = IRDMA_CQ_COMPL_EVENT; + + iwcq = to_iwcq(ibcq); + ukcq = &iwcq->sc_cq.cq_uk; + if (notify_flags == IB_CQ_SOLICITED) + cq_notify = IRDMA_CQ_COMPL_SOLICITED; + spin_lock_irqsave(&iwcq->lock,
flags); + ukcq->ops.iw_cq_request_notification(ukcq, cq_notify); + spin_unlock_irqrestore(&iwcq->lock, flags); + + return 0; +} + +/** + * irdma_port_immutable - return port's immutable data + * @ibdev: ib dev struct + * @port_num: port number + * @immutable: immutable data for the port return + */ +static int irdma_port_immutable(struct ib_device *ibdev, u8 port_num, + struct ib_port_immutable *immutable) +{ + struct ib_port_attr attr; + int err; + struct irdma_device *iwdev = to_iwdev(ibdev); + + if (iwdev->roce_mode) { + immutable->core_cap_flags = RDMA_CORE_PORT_IBA_ROCE_UDP_ENCAP; + immutable->max_mad_size = IB_MGMT_MAD_SIZE; + } else { + immutable->core_cap_flags = RDMA_CORE_PORT_IWARP; + } + err = ib_query_port(ibdev, port_num, &attr); + if (err) + return err; + + immutable->pkey_tbl_len = attr.pkey_tbl_len; + immutable->gid_tbl_len = attr.gid_tbl_len; + + return 0; +} + +static const char *const irdma_hw_stat_names[] = { + /* 32bit names */ + [IRDMA_HW_STAT_INDEX_RXVLANERR] = "rxVlanErrors", + [IRDMA_HW_STAT_INDEX_IP4RXDISCARD] = "ip4InDiscards", + [IRDMA_HW_STAT_INDEX_IP4RXTRUNC] = "ip4InTruncatedPkts", + [IRDMA_HW_STAT_INDEX_IP4TXNOROUTE] = "ip4OutNoRoutes", + [IRDMA_HW_STAT_INDEX_IP6RXDISCARD] = "ip6InDiscards", + [IRDMA_HW_STAT_INDEX_IP6RXTRUNC] = "ip6InTruncatedPkts", + [IRDMA_HW_STAT_INDEX_IP6TXNOROUTE] = "ip6OutNoRoutes", + [IRDMA_HW_STAT_INDEX_TCPRTXSEG] = "tcpRetransSegs", + [IRDMA_HW_STAT_INDEX_TCPRXOPTERR] = "tcpInOptErrors", + [IRDMA_HW_STAT_INDEX_TCPRXPROTOERR] = "tcpInProtoErrors", + [IRDMA_HW_STAT_INDEX_RXRPCNPHANDLED] = "cnpHandled", + [IRDMA_HW_STAT_INDEX_RXRPCNPIGNORED] = "cnpIgnored", + [IRDMA_HW_STAT_INDEX_TXNPCNPSENT] = "cnpSent", + + /* 64bit names */ + [IRDMA_HW_STAT_INDEX_IP4RXOCTS + IRDMA_HW_STAT_INDEX_MAX_32] = + "ip4InOctets", + [IRDMA_HW_STAT_INDEX_IP4RXPKTS + IRDMA_HW_STAT_INDEX_MAX_32] = + "ip4InPkts", + [IRDMA_HW_STAT_INDEX_IP4RXFRAGS + IRDMA_HW_STAT_INDEX_MAX_32] = + "ip4InReasmRqd", + [IRDMA_HW_STAT_INDEX_IP4RXMCOCTS + IRDMA_HW_STAT_INDEX_MAX_32] = + "ip4InMcastOctets", + [IRDMA_HW_STAT_INDEX_IP4RXMCPKTS + IRDMA_HW_STAT_INDEX_MAX_32] = + "ip4InMcastPkts", + [IRDMA_HW_STAT_INDEX_IP4TXOCTS + IRDMA_HW_STAT_INDEX_MAX_32] = + "ip4OutOctets", + [IRDMA_HW_STAT_INDEX_IP4TXPKTS + IRDMA_HW_STAT_INDEX_MAX_32] = + "ip4OutPkts", + [IRDMA_HW_STAT_INDEX_IP4TXFRAGS + IRDMA_HW_STAT_INDEX_MAX_32] = + "ip4OutSegRqd", + [IRDMA_HW_STAT_INDEX_IP4TXMCOCTS + IRDMA_HW_STAT_INDEX_MAX_32] = + "ip4OutMcastOctets", + [IRDMA_HW_STAT_INDEX_IP4TXMCPKTS + IRDMA_HW_STAT_INDEX_MAX_32] = + "ip4OutMcastPkts", + [IRDMA_HW_STAT_INDEX_IP6RXOCTS + IRDMA_HW_STAT_INDEX_MAX_32] = + "ip6InOctets", + [IRDMA_HW_STAT_INDEX_IP6RXPKTS + IRDMA_HW_STAT_INDEX_MAX_32] = + "ip6InPkts", + [IRDMA_HW_STAT_INDEX_IP6RXFRAGS + IRDMA_HW_STAT_INDEX_MAX_32] = + "ip6InReasmRqd", + [IRDMA_HW_STAT_INDEX_IP6RXMCOCTS + IRDMA_HW_STAT_INDEX_MAX_32] = + "ip6InMcastOctets", + [IRDMA_HW_STAT_INDEX_IP6RXMCPKTS + IRDMA_HW_STAT_INDEX_MAX_32] = + "ip6InMcastPkts", + [IRDMA_HW_STAT_INDEX_IP6TXOCTS + IRDMA_HW_STAT_INDEX_MAX_32] = + "ip6OutOctets", + [IRDMA_HW_STAT_INDEX_IP6TXPKTS + IRDMA_HW_STAT_INDEX_MAX_32] = + "ip6OutPkts", + [IRDMA_HW_STAT_INDEX_IP6TXFRAGS + IRDMA_HW_STAT_INDEX_MAX_32] = + "ip6OutSegRqd", + [IRDMA_HW_STAT_INDEX_IP6TXMCOCTS + IRDMA_HW_STAT_INDEX_MAX_32] = + "ip6OutMcastOctets", + [IRDMA_HW_STAT_INDEX_IP6TXMCPKTS + IRDMA_HW_STAT_INDEX_MAX_32] = + "ip6OutMcastPkts", + [IRDMA_HW_STAT_INDEX_TCPRXSEGS + IRDMA_HW_STAT_INDEX_MAX_32] = + "tcpInSegs", + [IRDMA_HW_STAT_INDEX_TCPTXSEG + IRDMA_HW_STAT_INDEX_MAX_32] = + 
"tcpOutSegs", + [IRDMA_HW_STAT_INDEX_RDMARXRDS + IRDMA_HW_STAT_INDEX_MAX_32] = + "iwInRdmaReads", + [IRDMA_HW_STAT_INDEX_RDMARXSNDS + IRDMA_HW_STAT_INDEX_MAX_32] = + "iwInRdmaSends", + [IRDMA_HW_STAT_INDEX_RDMARXWRS + IRDMA_HW_STAT_INDEX_MAX_32] = + "iwInRdmaWrites", + [IRDMA_HW_STAT_INDEX_RDMATXRDS + IRDMA_HW_STAT_INDEX_MAX_32] = + "iwOutRdmaReads", + [IRDMA_HW_STAT_INDEX_RDMATXSNDS + IRDMA_HW_STAT_INDEX_MAX_32] = + "iwOutRdmaSends", + [IRDMA_HW_STAT_INDEX_RDMATXWRS + IRDMA_HW_STAT_INDEX_MAX_32] = + "iwOutRdmaWrites", + [IRDMA_HW_STAT_INDEX_RDMAVBND + IRDMA_HW_STAT_INDEX_MAX_32] = + "iwRdmaBnd", + [IRDMA_HW_STAT_INDEX_RDMAVINV + IRDMA_HW_STAT_INDEX_MAX_32] = + "iwRdmaInv", + [IRDMA_HW_STAT_INDEX_UDPRXPKTS + IRDMA_HW_STAT_INDEX_MAX_32] = + "RxUDP", + [IRDMA_HW_STAT_INDEX_UDPTXPKTS + IRDMA_HW_STAT_INDEX_MAX_32] = + "TxUDP", + [IRDMA_HW_STAT_INDEX_RXNPECNMARKEDPKTS + IRDMA_HW_STAT_INDEX_MAX_32] = + "RxECNMrkd", +}; + +static void irdma_get_dev_fw_str(struct ib_device *dev, char *str) +{ + struct irdma_device *iwdev = to_iwdev(dev); + + snprintf(str, IB_FW_VERSION_NAME_MAX, "%u.%u", + FW_MAJOR_VER(&iwdev->rf->sc_dev), + FW_MINOR_VER(&iwdev->rf->sc_dev)); +} + +/** + * irdma_alloc_hw_stats - Allocate a hw stats structure + * @ibdev: device pointer from stack + * @port_num: port number + */ +static struct rdma_hw_stats *irdma_alloc_hw_stats(struct ib_device *ibdev, + u8 port_num) +{ + struct irdma_device *iwdev = to_iwdev(ibdev); + struct irdma_sc_dev *dev = &iwdev->rf->sc_dev; + int num_counters = IRDMA_HW_STAT_INDEX_MAX_32 + + IRDMA_HW_STAT_INDEX_MAX_64; + unsigned long lifespan = RDMA_HW_STATS_DEFAULT_LIFESPAN; + + BUILD_BUG_ON(ARRAY_SIZE(irdma_hw_stat_names) != + (IRDMA_HW_STAT_INDEX_MAX_32 + IRDMA_HW_STAT_INDEX_MAX_64)); + + /* + * PFs get the default update lifespan, but VFs only update once + * per second + */ + if (!dev->privileged) + lifespan = 1000; + + return rdma_alloc_hw_stats_struct(irdma_hw_stat_names, num_counters, + lifespan); +} + +/** + * irdma_get_hw_stats - Populates the rdma_hw_stats structure + * @ibdev: device pointer from stack + * @stats: stats pointer from stack + * @port_num: port number + * @index: which hw counter the stack is requesting we update + */ +static int irdma_get_hw_stats(struct ib_device *ibdev, + struct rdma_hw_stats *stats, u8 port_num, + int index) +{ + struct irdma_device *iwdev = to_iwdev(ibdev); + struct irdma_dev_hw_stats *hw_stats = &iwdev->vsi.pestat->hw_stats; + + if (iwdev->rf->rdma_ver > IRDMA_GEN_1) + irdma_cqp_gather_stats_cmd(&iwdev->rf->sc_dev, iwdev->vsi.pestat, true); + + memcpy(&stats->value[0], hw_stats, sizeof(*hw_stats)); + + return stats->num_counters; +} + +/** + * irdma_query_gid - Query port GID + * @ibdev: device pointer from stack + * @port: port number + * @index: Entry index + * @gid: Global ID + */ +static int irdma_query_gid(struct ib_device *ibdev, u8 port, int index, + union ib_gid *gid) +{ + struct irdma_device *iwdev = to_iwdev(ibdev); + + memset(gid->raw, 0, sizeof(gid->raw)); + ether_addr_copy(gid->raw, iwdev->netdev->dev_addr); + + return 0; +} + +/** + * mcast_list_add - Add a new mcast item to list + * @rf: RDMA PCI function + * @new_elem: pointer to element to add + */ +static void mcast_list_add(struct irdma_pci_f *rf, + struct mc_table_list *new_elem) +{ + list_add(&new_elem->list, &rf->mc_qht_list.list); +} + +/** + * mcast_list_del - Remove an mcast item from list + * @mc_qht_elem: pointer to mcast table list element + */ +static void mcast_list_del(struct mc_table_list *mc_qht_elem) +{ + if (mc_qht_elem) + 
list_del(&mc_qht_elem->list); +} + +/** + * mcast_list_lookup_ip - Search mcast list for address + * @rf: RDMA PCI function + * @ip_mcast: pointer to mcast IP address + */ +static struct mc_table_list *mcast_list_lookup_ip(struct irdma_pci_f *rf, + u32 *ip_mcast) +{ + struct mc_table_list *mc_qht_el; + struct list_head *pos, *q; + + list_for_each_safe (pos, q, &rf->mc_qht_list.list) { + mc_qht_el = list_entry(pos, struct mc_table_list, list); + if (!memcmp(mc_qht_el->mc_info.dest_ip, ip_mcast, + sizeof(mc_qht_el->mc_info.dest_ip))) + return mc_qht_el; + } + + return NULL; +} + +/** + * irdma_mcast_cqp_op - perform a mcast cqp operation + * @iwdev: irdma device + * @mc_grp_ctx: mcast group info + * @op: operation + * + * returns error status + */ +static int irdma_mcast_cqp_op(struct irdma_device *iwdev, + struct irdma_mcast_grp_info *mc_grp_ctx, u8 op) +{ + struct cqp_cmds_info *cqp_info; + struct irdma_cqp_request *cqp_request; + enum irdma_status_code status; + + cqp_request = irdma_get_cqp_request(&iwdev->rf->cqp, true); + if (!cqp_request) + return -ENOMEM; + + cqp_request->info.in.u.mc_create.info = *mc_grp_ctx; + cqp_info = &cqp_request->info; + cqp_info->cqp_cmd = op; + cqp_info->post_sq = 1; + cqp_info->in.u.mc_create.scratch = (uintptr_t)cqp_request; + cqp_info->in.u.mc_create.cqp = &iwdev->rf->cqp.sc_cqp; + status = irdma_handle_cqp_op(iwdev->rf, cqp_request); + if (status) { + ibdev_dbg(to_ibdev(iwdev), "VERBS: CQP-OP_%s failed\n", + (op == IRDMA_OP_MC_MODIFY) ? "MODIFY" : "CREATE"); + return -ENOMEM; + } + + return 0; +} + +/** + * irdma_mcast_mac - Get the multicast MAC for an IP address + * @ip_addr: IPv4 or IPv6 address + * @mac: pointer to result MAC address + * @ipv4: flag indicating IPv4 or IPv6 + * + */ +void irdma_mcast_mac(u32 *ip_addr, u8 *mac, bool ipv4) +{ + u8 *ip = (u8 *)ip_addr; + + if (ipv4) { + unsigned char mac4[ETH_ALEN] = {0x01, 0x00, 0x5E, 0x00, + 0x00, 0x00}; + + mac4[3] = ip[2] & 0x7F; + mac4[4] = ip[1]; + mac4[5] = ip[0]; + ether_addr_copy(mac, mac4); + } else { + unsigned char mac6[ETH_ALEN] = {0x33, 0x33, 0x00, 0x00, + 0x00, 0x00}; + + mac6[2] = ip[3]; + mac6[3] = ip[2]; + mac6[4] = ip[1]; + mac6[5] = ip[0]; + ether_addr_copy(mac, mac6); + } +} + +/** + * irdma_attach_mcast - attach a qp to a multicast group + * @ibqp: ptr to qp + * @ibgid: pointer to global ID + * @lid: local ID + * + * returns error status + */ +static int irdma_attach_mcast(struct ib_qp *ibqp, union ib_gid *ibgid, u16 lid) +{ + struct irdma_qp *iwqp = to_iwqp(ibqp); + struct irdma_device *iwdev = iwqp->iwdev; + struct irdma_pci_f *rf = iwdev->rf; + struct mc_table_list *mc_qht_elem; + struct irdma_mcast_grp_ctx_entry_info mcg_info = {}; + unsigned long flags; + u32 ip_addr[4] = {}; + u32 mgn; + u32 no_mgs; + int ret = 0; + bool ipv4; + u16 vlan_id; + union { + struct sockaddr saddr; + struct sockaddr_in saddr_in; + struct sockaddr_in6 saddr_in6; + } sgid_addr; + unsigned char dmac[ETH_ALEN]; + + rdma_gid2ip(&sgid_addr.saddr, ibgid); + if (rdma_gid_attr_network_type(ibqp->av_sgid_attr) == + RDMA_NETWORK_IPV6) { + irdma_copy_ip_ntohl(ip_addr, + sgid_addr.saddr_in6.sin6_addr.in6_u.u6_addr32); + irdma_netdev_vlan_ipv6(ip_addr, &vlan_id, NULL); + ipv4 = false; + ibdev_dbg(to_ibdev(iwdev), + "VERBS: qp_id=%d, IP6address=%pI6\n", ibqp->qp_num, + ip_addr); + irdma_mcast_mac(ip_addr, dmac, false); + } else { + ip_addr[0] = ntohl(sgid_addr.saddr_in.sin_addr.s_addr); + ipv4 = true; + vlan_id = irdma_get_vlan_ipv4(ip_addr); + irdma_mcast_mac(ip_addr, dmac, true); +
ibdev_dbg(to_ibdev(iwdev), + "VERBS: qp_id=%d, IP4address=%pI4, MAC=%pM\n", + ibqp->qp_num, ip_addr, dmac); + } + + spin_lock_irqsave(&rf->qh_list_lock, flags); + mc_qht_elem = mcast_list_lookup_ip(rf, ip_addr); + if (!mc_qht_elem) { + struct irdma_dma_mem *dma_mem_mc; + + spin_unlock_irqrestore(&rf->qh_list_lock, flags); + mc_qht_elem = kzalloc(sizeof(*mc_qht_elem), GFP_KERNEL); + if (!mc_qht_elem) + return -ENOMEM; + + mc_qht_elem->mc_info.ipv4_valid = ipv4; + memcpy(mc_qht_elem->mc_info.dest_ip, ip_addr, + sizeof(mc_qht_elem->mc_info.dest_ip)); + ret = irdma_alloc_rsrc(rf, rf->allocated_mcgs, rf->max_mcg, + &mgn, &rf->next_mcg); + if (ret) { + kfree(mc_qht_elem); + return -ENOMEM; + } + + mc_qht_elem->mc_info.mgn = mgn; + dma_mem_mc = &mc_qht_elem->mc_grp_ctx.dma_mem_mc; + dma_mem_mc->size = ALIGN(sizeof(u64) * IRDMA_MAX_MGS_PER_CTX, + IRDMA_HW_PAGE_SIZE); + dma_mem_mc->va = dma_alloc_coherent(hw_to_dev(&rf->hw), + dma_mem_mc->size, + &dma_mem_mc->pa, + GFP_KERNEL); + if (!dma_mem_mc->va) { + irdma_free_rsrc(rf, rf->allocated_mcgs, mgn); + kfree(mc_qht_elem); + return -ENOMEM; + } + + mc_qht_elem->mc_grp_ctx.mg_id = (u16)mgn; + memcpy(mc_qht_elem->mc_grp_ctx.dest_ip_addr, ip_addr, + sizeof(mc_qht_elem->mc_grp_ctx.dest_ip_addr)); + mc_qht_elem->mc_grp_ctx.ipv4_valid = ipv4; + mc_qht_elem->mc_grp_ctx.vlan_id = vlan_id; + if (vlan_id < VLAN_N_VID) + mc_qht_elem->mc_grp_ctx.vlan_valid = true; + mc_qht_elem->mc_grp_ctx.hmc_fcn_id = iwdev->vsi.fcn_id; + ether_addr_copy(mc_qht_elem->mc_grp_ctx.dest_mac_addr, dmac); + + spin_lock_irqsave(&rf->qh_list_lock, flags); + mcast_list_add(rf, mc_qht_elem); + } else { + if (mc_qht_elem->mc_grp_ctx.no_of_mgs == + IRDMA_MAX_MGS_PER_CTX) { + spin_unlock_irqrestore(&rf->qh_list_lock, flags); + return -ENOMEM; + } + } + + mcg_info.qp_id = iwqp->ibqp.qp_num; + no_mgs = mc_qht_elem->mc_grp_ctx.no_of_mgs; + rf->sc_dev.iw_uda_ops->mcast_grp_add(&mc_qht_elem->mc_grp_ctx, + &mcg_info); + spin_unlock_irqrestore(&rf->qh_list_lock, flags); + + /* Only if there is a change do we need to modify or create */ + if (!no_mgs) { + ret = irdma_mcast_cqp_op(iwdev, &mc_qht_elem->mc_grp_ctx, + IRDMA_OP_MC_CREATE); + } else if (no_mgs != mc_qht_elem->mc_grp_ctx.no_of_mgs) { + ret = irdma_mcast_cqp_op(iwdev, &mc_qht_elem->mc_grp_ctx, + IRDMA_OP_MC_MODIFY); + } else { + return 0; + } + + if (ret) + goto error; + + return 0; + +error: + rf->sc_dev.iw_uda_ops->mcast_grp_del(&mc_qht_elem->mc_grp_ctx, + &mcg_info); + if (!mc_qht_elem->mc_grp_ctx.no_of_mgs) { + mcast_list_del(mc_qht_elem); + dma_free_coherent(hw_to_dev(&rf->hw), + mc_qht_elem->mc_grp_ctx.dma_mem_mc.size, + mc_qht_elem->mc_grp_ctx.dma_mem_mc.va, + mc_qht_elem->mc_grp_ctx.dma_mem_mc.pa); + mc_qht_elem->mc_grp_ctx.dma_mem_mc.va = NULL; + irdma_free_rsrc(rf, rf->allocated_mcgs, + mc_qht_elem->mc_grp_ctx.mg_id); + kfree(mc_qht_elem); + } + + return ret; +} + +/** + * irdma_detach_mcast - detach a qp from a multicast group + * @ibqp: ptr to qp + * @ibgid: pointer to global ID + * @lid: local ID + * + * returns error status + */ +static int irdma_detach_mcast(struct ib_qp *ibqp, union ib_gid *ibgid, u16 lid) +{ + struct irdma_qp *iwqp = to_iwqp(ibqp); + struct irdma_device *iwdev = iwqp->iwdev; + struct irdma_pci_f *rf = iwdev->rf; + u32 ip_addr[4] = {}; + struct mc_table_list *mc_qht_elem; + struct irdma_mcast_grp_ctx_entry_info mcg_info = {}; + int ret; + unsigned long flags; + union { + struct sockaddr saddr; + struct sockaddr_in saddr_in; + struct sockaddr_in6 saddr_in6; + } sgid_addr; + + rdma_gid2ip(&sgid_addr.saddr, 
ibgid); + if (rdma_gid_attr_network_type(ibqp->av_sgid_attr) == + RDMA_NETWORK_IPV6) + irdma_copy_ip_ntohl(ip_addr, + sgid_addr.saddr_in6.sin6_addr.in6_u.u6_addr32); + else + ip_addr[0] = ntohl(sgid_addr.saddr_in.sin_addr.s_addr); + + spin_lock_irqsave(&rf->qh_list_lock, flags); + mc_qht_elem = mcast_list_lookup_ip(rf, ip_addr); + if (!mc_qht_elem) { + spin_unlock_irqrestore(&rf->qh_list_lock, flags); + ibdev_dbg(to_ibdev(iwdev), "VERBS: address not found MCG\n"); + return 0; + } + + mcg_info.qp_id = iwqp->ibqp.qp_num; + rf->sc_dev.iw_uda_ops->mcast_grp_del(&mc_qht_elem->mc_grp_ctx, + &mcg_info); + if (!mc_qht_elem->mc_grp_ctx.no_of_mgs) { + mcast_list_del(mc_qht_elem); + spin_unlock_irqrestore(&rf->qh_list_lock, flags); + ret = irdma_mcast_cqp_op(iwdev, &mc_qht_elem->mc_grp_ctx, + IRDMA_OP_MC_DESTROY); + if (ret) { + ibdev_dbg(to_ibdev(iwdev), + "VERBS: failed MC_DESTROY MCG\n"); + spin_lock_irqsave(&rf->qh_list_lock, flags); + mcast_list_add(rf, mc_qht_elem); + spin_unlock_irqrestore(&rf->qh_list_lock, flags); + return -EAGAIN; + } + + dma_free_coherent(hw_to_dev(&rf->hw), + mc_qht_elem->mc_grp_ctx.dma_mem_mc.size, + mc_qht_elem->mc_grp_ctx.dma_mem_mc.va, + mc_qht_elem->mc_grp_ctx.dma_mem_mc.pa); + mc_qht_elem->mc_grp_ctx.dma_mem_mc.va = NULL; + irdma_free_rsrc(rf, rf->allocated_mcgs, + mc_qht_elem->mc_grp_ctx.mg_id); + kfree(mc_qht_elem); + } else { + spin_unlock_irqrestore(&rf->qh_list_lock, flags); + ret = irdma_mcast_cqp_op(iwdev, &mc_qht_elem->mc_grp_ctx, + IRDMA_OP_MC_MODIFY); + if (ret) { + ibdev_dbg(to_ibdev(iwdev), + "VERBS: failed Modify MCG\n"); + return ret; + } + } + + return 0; +} + +/** + * irdma_create_ah - create address handle + * @ib_ah: address handle + * @attr: address handle attributes + * @flags: flags for sleepable + * @udata: User data + * + * returns a pointer to an address handle + */ +static int irdma_create_ah(struct ib_ah *ib_ah, + struct rdma_ah_attr *attr, u32 flags, + struct ib_udata *udata) +{ + struct irdma_pd *pd = to_iwpd(ib_ah->pd); + struct irdma_ah *ah = container_of(ib_ah, struct irdma_ah, ibah); + const struct ib_gid_attr *sgid_attr; + struct irdma_device *iwdev = to_iwdev(ib_ah->pd->device); + struct irdma_pci_f *rf = iwdev->rf; + struct irdma_sc_ah *sc_ah; + u32 ah_id = 0; + struct irdma_ah_info *ah_info; + struct irdma_create_ah_resp uresp; + union { + struct sockaddr saddr; + struct sockaddr_in saddr_in; + struct sockaddr_in6 saddr_in6; + } sgid_addr, dgid_addr; + int err; + u8 dmac[ETH_ALEN]; + + err = irdma_alloc_rsrc(rf, rf->allocated_ahs, rf->max_ah, &ah_id, + &rf->next_ah); + if (err) + return err; + + ah->pd = pd; + sc_ah = &ah->sc_ah; + sc_ah->ah_info.ah_idx = ah_id; + sc_ah->ah_info.vsi = &iwdev->vsi; + iwdev->rf->sc_dev.iw_uda_ops->init_ah(&rf->sc_dev, sc_ah); + ah->sgid_index = attr->grh.sgid_index; + sgid_attr = attr->grh.sgid_attr; + memcpy(&ah->dgid, &attr->grh.dgid, sizeof(ah->dgid)); + rdma_gid2ip(&sgid_addr.saddr, &sgid_attr->gid); + rdma_gid2ip(&dgid_addr.saddr, &attr->grh.dgid); + ah->av.attrs = *attr; + ah->av.net_type = rdma_gid_attr_network_type(sgid_attr); + ah->av.sgid_addr.saddr = sgid_addr.saddr; + ah->av.dgid_addr.saddr = dgid_addr.saddr; + ah_info = &sc_ah->ah_info; + ah_info->ah = sc_ah; + ah_info->ah_idx = ah_id; + ah_info->pd_idx = pd->sc_pd.pd_id; + if (attr->ah_flags & IB_AH_GRH) { + ah_info->flow_label = attr->grh.flow_label; + ah_info->hop_ttl = attr->grh.hop_limit; + ah_info->tc_tos = attr->grh.traffic_class; + } + + ether_addr_copy(dmac, attr->roce.dmac); + if (rdma_gid_attr_network_type(sgid_attr) == 
RDMA_NETWORK_IPV4) { + ah_info->ipv4_valid = true; + ah_info->dest_ip_addr[0] = + ntohl(dgid_addr.saddr_in.sin_addr.s_addr); + ah_info->src_ip_addr[0] = + ntohl(sgid_addr.saddr_in.sin_addr.s_addr); + ah_info->do_lpbk = irdma_ipv4_is_lpb(ah_info->src_ip_addr[0], + ah_info->dest_ip_addr[0]); + if (ipv4_is_multicast(dgid_addr.saddr_in.sin_addr.s_addr)) + irdma_mcast_mac(ah_info->dest_ip_addr, dmac, true); + } else { + irdma_copy_ip_ntohl(ah_info->dest_ip_addr, + dgid_addr.saddr_in6.sin6_addr.in6_u.u6_addr32); + irdma_copy_ip_ntohl(ah_info->src_ip_addr, + sgid_addr.saddr_in6.sin6_addr.in6_u.u6_addr32); + ah_info->do_lpbk = irdma_ipv6_is_lpb(ah_info->src_ip_addr, + ah_info->dest_ip_addr); + if (rdma_is_multicast_addr(&dgid_addr.saddr_in6.sin6_addr)) + irdma_mcast_mac(ah_info->dest_ip_addr, dmac, false); + } + + err = rdma_read_gid_l2_fields(sgid_attr, &ah_info->vlan_tag, + ah_info->mac_addr); + if (err) + goto error; + + ah_info->dst_arpindex = irdma_add_arp(iwdev->rf, ah_info->dest_ip_addr, + ah_info->ipv4_valid, dmac); + + if (ah_info->dst_arpindex == -1) { + err = -EINVAL; + goto error; + } + + if (ah_info->vlan_tag >= VLAN_N_VID && iwdev->dcb) + ah_info->vlan_tag = 0; + + if (ah_info->vlan_tag < VLAN_N_VID) { + ah_info->insert_vlan_tag = true; + ah_info->vlan_tag |= + rt_tos2priority(ah_info->tc_tos) << VLAN_PRIO_SHIFT; + } + + err = irdma_ah_cqp_op(iwdev->rf, sc_ah, IRDMA_OP_AH_CREATE, + flags & RDMA_CREATE_AH_SLEEPABLE, + irdma_gsi_ud_qp_ah_cb, sc_ah); + if (err) { + ibdev_dbg(to_ibdev(iwdev), "VERBS: CQP-OP Create AH fail"); + goto error; + } + + if (!(flags & RDMA_CREATE_AH_SLEEPABLE)) { + int cnt = CQP_COMPL_WAIT_TIME_MS * CQP_TIMEOUT_THRESHOLD; + + do { + irdma_cqp_ce_handler(rf, &rf->ccq.sc_cq); + mdelay(1); + } while (!sc_ah->ah_info.ah_valid && --cnt); + + if (!cnt) { + ibdev_dbg(to_ibdev(iwdev), + "VERBS: CQP create AH timed out"); + err = -ETIMEDOUT; + goto error; + } + } + + if (udata) { + uresp.ah_id = ah->sc_ah.ah_info.ah_idx; + err = ib_copy_to_udata(udata, &uresp, + min(sizeof(uresp), udata->outlen)); + } + return 0; + +error: + irdma_free_rsrc(iwdev->rf, iwdev->rf->allocated_ahs, ah_id); + + return err; +} + +/** + * irdma_destroy_ah - Destroy address handle + * @ibah: pointer to address handle + * @flags: flags for sleepable + */ +static void irdma_destroy_ah(struct ib_ah *ibah, u32 flags) +{ + struct irdma_device *iwdev = to_iwdev(ibah->device); + struct irdma_ah *ah = to_iwah(ibah); + + irdma_ah_cqp_op(iwdev->rf, &ah->sc_ah, IRDMA_OP_AH_DESTROY, + false, NULL, ah); + + irdma_free_rsrc(iwdev->rf, iwdev->rf->allocated_ahs, + ah->sc_ah.ah_info.ah_idx); +} + +/** + * irdma_query_ah - Query address handle + * @ibah: pointer to address handle + * @ah_attr: address handle attributes + */ +static int irdma_query_ah(struct ib_ah *ibah, struct rdma_ah_attr *ah_attr) +{ + struct irdma_ah *ah = to_iwah(ibah); + + memset(ah_attr, 0, sizeof(*ah_attr)); + if (ah->av.attrs.ah_flags & IB_AH_GRH) { + ah_attr->ah_flags = IB_AH_GRH; + ah_attr->grh.flow_label = ah->sc_ah.ah_info.flow_label; + ah_attr->grh.traffic_class = ah->sc_ah.ah_info.tc_tos; + ah_attr->grh.hop_limit = ah->sc_ah.ah_info.hop_ttl; + ah_attr->grh.sgid_index = ah->sgid_index; + memcpy(&ah_attr->grh.dgid, &ah->dgid, + sizeof(ah_attr->grh.dgid)); + } + + return 0; +} + +static enum rdma_link_layer irdma_get_link_layer(struct ib_device *ibdev, + u8 port_num) +{ + return IB_LINK_LAYER_ETHERNET; +} + +static __be64 irdma_mac_to_guid(struct net_device *ndev) +{ + unsigned char *mac =
ndev->dev_addr; + __be64 guid; + unsigned char *dst = (unsigned char *)&guid; + + dst[0] = mac[0] ^ 2; + dst[1] = mac[1]; + dst[2] = mac[2]; + dst[3] = 0xff; + dst[4] = 0xfe; + dst[5] = mac[3]; + dst[6] = mac[4]; + dst[7] = mac[5]; + + return guid; +} + +static const struct ib_device_ops irdma_roce_dev_ops = { + .attach_mcast = irdma_attach_mcast, + .detach_mcast = irdma_detach_mcast, + .get_link_layer = irdma_get_link_layer, + .modify_qp = irdma_modify_qp_roce, + .query_ah = irdma_query_ah, +}; + +static const struct ib_device_ops irdma_iw_dev_ops = { + .modify_qp = irdma_modify_qp, + .query_gid = irdma_query_gid, +}; + +static const struct ib_device_ops irdma_dev_ops = { + .owner = THIS_MODULE, + .driver_id = RDMA_DRIVER_IRDMA, + .uverbs_abi_ver = IRDMA_ABI_VER, + + .alloc_hw_stats = irdma_alloc_hw_stats, + .alloc_mr = irdma_alloc_mr, + .alloc_mw = irdma_alloc_mw, + .alloc_pd = irdma_alloc_pd, + .alloc_ucontext = irdma_alloc_ucontext, + .create_ah = irdma_create_ah, + .create_cq = irdma_create_cq, + .create_qp = irdma_create_qp, + .dealloc_driver = irdma_ib_dealloc_device, + .dealloc_mw = irdma_dealloc_mw, + .dealloc_pd = irdma_dealloc_pd, + .dealloc_ucontext = irdma_dealloc_ucontext, + .dereg_mr = irdma_dereg_mr, + .destroy_ah = irdma_destroy_ah, + .destroy_cq = irdma_destroy_cq, + .destroy_qp = irdma_destroy_qp, + .disassociate_ucontext = irdma_disassociate_ucontext, + .drain_rq = irdma_drain_rq, + .drain_sq = irdma_drain_sq, + .get_dev_fw_str = irdma_get_dev_fw_str, + .get_dma_mr = irdma_get_dma_mr, + .get_hw_stats = irdma_get_hw_stats, + .get_port_immutable = irdma_port_immutable, + .map_mr_sg = irdma_map_mr_sg, + .mmap = irdma_mmap, + .mmap_free = irdma_mmap_free, + .poll_cq = irdma_poll_cq, + .post_recv = irdma_post_recv, + .post_send = irdma_post_send, + .query_device = irdma_query_device, + .query_pkey = irdma_query_pkey, + .query_port = irdma_query_port, + .query_qp = irdma_query_qp, + .reg_user_mr = irdma_reg_user_mr, + .req_notify_cq = irdma_req_notify_cq, + .resize_cq = irdma_resize_cq, + INIT_RDMA_OBJ_SIZE(ib_pd, irdma_pd, ibpd), + INIT_RDMA_OBJ_SIZE(ib_ucontext, irdma_ucontext, ibucontext), + INIT_RDMA_OBJ_SIZE(ib_ah, irdma_ah, ibah), + INIT_RDMA_OBJ_SIZE(ib_cq, irdma_cq, ibcq), +}; + +/** + * irdma_init_roce_device - initialization of roce rdma device + * @iwdev: irdma device + */ +static void irdma_init_roce_device(struct irdma_device *iwdev) +{ + iwdev->ibdev.uverbs_cmd_mask |= + (1ull << IB_USER_VERBS_CMD_ATTACH_MCAST) | + (1ull << IB_USER_VERBS_CMD_DETACH_MCAST); + + iwdev->ibdev.node_type = RDMA_NODE_IB_CA; + iwdev->ibdev.node_guid = irdma_mac_to_guid(iwdev->netdev); + ib_set_device_ops(&iwdev->ibdev, &irdma_roce_dev_ops); +} + +/** + * irdma_init_iw_device - initialization of iwarp rdma device + * @iwdev: irdma device + */ +static int irdma_init_iw_device(struct irdma_device *iwdev) +{ + struct net_device *netdev = iwdev->netdev; + + iwdev->ibdev.node_type = RDMA_NODE_RNIC; + ether_addr_copy((u8 *)&iwdev->ibdev.node_guid, netdev->dev_addr); + iwdev->ibdev.ops.iw_add_ref = irdma_add_ref; + iwdev->ibdev.ops.iw_rem_ref = irdma_rem_ref; + iwdev->ibdev.ops.iw_get_qp = irdma_get_qp; + iwdev->ibdev.ops.iw_connect = irdma_connect; + iwdev->ibdev.ops.iw_accept = irdma_accept; + iwdev->ibdev.ops.iw_reject = irdma_reject; + iwdev->ibdev.ops.iw_create_listen = irdma_create_listen; + iwdev->ibdev.ops.iw_destroy_listen = irdma_destroy_listen; + memcpy(iwdev->ibdev.iw_ifname, netdev->name, + sizeof(iwdev->ibdev.iw_ifname)); + ib_set_device_ops(&iwdev->ibdev, &irdma_iw_dev_ops); +
+ return 0; +} + +/** + * irdma_init_rdma_device - initialization of rdma device + * @iwdev: irdma device + */ +static int irdma_init_rdma_device(struct irdma_device *iwdev) +{ + struct pci_dev *pcidev = iwdev->rf->hw.pdev; + int ret; + + iwdev->ibdev.uverbs_cmd_mask = + (1ull << IB_USER_VERBS_CMD_GET_CONTEXT) | + (1ull << IB_USER_VERBS_CMD_QUERY_DEVICE) | + (1ull << IB_USER_VERBS_CMD_QUERY_PORT) | + (1ull << IB_USER_VERBS_CMD_ALLOC_PD) | + (1ull << IB_USER_VERBS_CMD_DEALLOC_PD) | + (1ull << IB_USER_VERBS_CMD_REG_MR) | + (1ull << IB_USER_VERBS_CMD_DEREG_MR) | + (1ull << IB_USER_VERBS_CMD_CREATE_COMP_CHANNEL) | + (1ull << IB_USER_VERBS_CMD_CREATE_CQ) | + (1ull << IB_USER_VERBS_CMD_RESIZE_CQ) | + (1ull << IB_USER_VERBS_CMD_DESTROY_CQ) | + (1ull << IB_USER_VERBS_CMD_REQ_NOTIFY_CQ) | + (1ull << IB_USER_VERBS_CMD_CREATE_QP) | + (1ull << IB_USER_VERBS_CMD_MODIFY_QP) | + (1ull << IB_USER_VERBS_CMD_QUERY_QP) | + (1ull << IB_USER_VERBS_CMD_POLL_CQ) | + (1ull << IB_USER_VERBS_CMD_CREATE_AH) | + (1ull << IB_USER_VERBS_CMD_DESTROY_AH) | + (1ull << IB_USER_VERBS_CMD_DESTROY_QP) | + (1ull << IB_USER_VERBS_CMD_ALLOC_MW) | + (1ull << IB_USER_VERBS_CMD_BIND_MW) | + (1ull << IB_USER_VERBS_CMD_DEALLOC_MW) | + (1ull << IB_USER_VERBS_CMD_POST_RECV) | + (1ull << IB_USER_VERBS_CMD_POST_SEND); + iwdev->ibdev.uverbs_ex_cmd_mask = + (1ull << IB_USER_VERBS_EX_CMD_MODIFY_QP); + + if (iwdev->roce_mode) { + irdma_init_roce_device(iwdev); + } else { + ret = irdma_init_iw_device(iwdev); + if (ret) + return ret; + } + iwdev->ibdev.phys_port_cnt = 1; + iwdev->ibdev.num_comp_vectors = iwdev->rf->ceqs_count; + iwdev->ibdev.dev.parent = &pcidev->dev; + ib_set_device_ops(&iwdev->ibdev, &irdma_dev_ops); + + return 0; +} + +/** + * irdma_port_ibevent - indicate port event + * @iwdev: irdma device + */ +void irdma_port_ibevent(struct irdma_device *iwdev) +{ + struct ib_event event; + + event.device = &iwdev->ibdev; + event.element.port_num = 1; + event.event = + iwdev->iw_status ? IB_EVENT_PORT_ACTIVE : IB_EVENT_PORT_ERR; + ib_dispatch_event(&event); +} + +/** + * irdma_ib_unregister_device - unregister rdma device from IB core + * @iwdev: irdma device + */ +void irdma_ib_unregister_device(struct irdma_device *iwdev) +{ + iwdev->iw_status = 0; + irdma_port_ibevent(iwdev); + ib_unregister_device(&iwdev->ibdev); +} + +/** + * irdma_ib_register_device - register irdma device to IB core + * @iwdev: irdma device + */ +int irdma_ib_register_device(struct irdma_device *iwdev) +{ + int ret; + + ret = irdma_init_rdma_device(iwdev); + if (ret) + return ret; + + ret = ib_device_set_netdev(&iwdev->ibdev, iwdev->netdev, 1); + if (ret) + goto error; + ret = ib_register_device(&iwdev->ibdev, "irdma%d"); + if (ret) + goto error; + + iwdev->iw_status = 1; + irdma_port_ibevent(iwdev); + + return 0; + +error: + if (ret) + dev_dbg(rfdev_to_dev(&iwdev->rf->sc_dev), + "VERBS: Register RDMA device fail\n"); + + return ret; +} + +/** + * irdma_get_device - find an iwdev given a netdev + * @netdev: pointer to net_device + * + * This function takes a reference on ibdev and prevents ib + * device deregistration. The caller must call a matching + * irdma_put_device. + */ +struct irdma_device *irdma_get_device(struct net_device *netdev) +{ + struct ib_device *ibdev = ib_device_get_by_netdev(netdev, + RDMA_DRIVER_IRDMA); + + if (!ibdev) + return NULL; + + return to_iwdev(ibdev); +} + +/** + * irdma_put_device - release ibdev refcnt + * @iwdev: irdma device + * + * release refcnt on ibdev taken with irdma_get_device.
+ */ +void irdma_put_device(struct irdma_device *iwdev) +{ + struct ib_device *ibdev = &iwdev->ibdev; + + ib_device_put(ibdev); +} + +/** + * irdma_ib_dealloc_device + * @ibdev: ib device + * + * callback from ibdev dealloc_driver to deallocate resources + * under irdma device + */ +void irdma_ib_dealloc_device(struct ib_device *ibdev) +{ + struct irdma_device *iwdev = to_iwdev(ibdev); + + irdma_rt_deinit_hw(iwdev); +} diff --git a/drivers/infiniband/hw/irdma/verbs.h b/drivers/infiniband/hw/irdma/verbs.h new file mode 100644 index 000000000000..2746c833f888 --- /dev/null +++ b/drivers/infiniband/hw/irdma/verbs.h @@ -0,0 +1,213 @@ +/* SPDX-License-Identifier: GPL-2.0 or Linux-OpenIB */ +/* Copyright (c) 2015 - 2019 Intel Corporation */ +#ifndef IRDMA_VERBS_H +#define IRDMA_VERBS_H + +#define IRDMA_MAX_SAVED_PHY_PGADDR 4 + +#define IRDMA_PKEY_TBL_SZ 1 +#define IRDMA_DEFAULT_PKEY 0xFFFF + +struct irdma_ucontext { + struct ib_ucontext ibucontext; + struct irdma_device *iwdev; + struct rdma_user_mmap_entry *db_mmap_entry; + struct list_head cq_reg_mem_list; + spinlock_t cq_reg_mem_list_lock; /* protect CQ memory list */ + struct list_head qp_reg_mem_list; + spinlock_t qp_reg_mem_list_lock; /* protect QP memory list */ + int abi_ver; +}; + +struct irdma_pd { + struct ib_pd ibpd; + struct irdma_sc_pd sc_pd; +}; + +struct irdma_av { + u8 macaddr[16]; + struct rdma_ah_attr attrs; + union { + struct sockaddr saddr; + struct sockaddr_in saddr_in; + struct sockaddr_in6 saddr_in6; + } sgid_addr, dgid_addr; + u8 net_type; +}; + +struct irdma_ah { + struct ib_ah ibah; + struct irdma_sc_ah sc_ah; + struct irdma_pd *pd; + struct irdma_av av; + u8 sgid_index; + union ib_gid dgid; +}; + +struct irdma_hmc_pble { + union { + u32 idx; + dma_addr_t addr; + }; +}; + +struct irdma_cq_mr { + struct irdma_hmc_pble cq_pbl; + dma_addr_t shadow; + bool split; +}; + +struct irdma_qp_mr { + struct irdma_hmc_pble sq_pbl; + struct irdma_hmc_pble rq_pbl; + dma_addr_t shadow; + struct page *sq_page; +}; + +struct irdma_cq_buf { + struct irdma_dma_mem kmem_buf; + struct irdma_cq_uk cq_uk; + struct irdma_hw *hw; + struct list_head list; + struct work_struct work; +}; + +struct irdma_pbl { + struct list_head list; + union { + struct irdma_qp_mr qp_mr; + struct irdma_cq_mr cq_mr; + }; + + bool pbl_allocated:1; + bool on_list:1; + u64 user_base; + struct irdma_pble_alloc pble_alloc; + struct irdma_mr *iwmr; +}; + +struct irdma_mr { + union { + struct ib_mr ibmr; + struct ib_mw ibmw; + struct ib_fmr ibfmr; + }; + struct ib_umem *region; + u16 type; + u32 page_cnt; + u64 page_size; + u32 npages; + u32 stag; + u64 len; + u64 pgaddrmem[IRDMA_MAX_SAVED_PHY_PGADDR]; + struct irdma_pbl iwpbl; +}; + +struct irdma_cq { + struct ib_cq ibcq; + struct irdma_sc_cq sc_cq; + u16 cq_head; + u16 cq_size; + u16 cq_num; + bool user_mode; + u32 polled_cmpls; + u32 cq_mem_size; + struct irdma_dma_mem kmem; + struct irdma_dma_mem kmem_shadow; + spinlock_t lock; /* for poll cq */ + struct irdma_pbl *iwpbl; + struct irdma_pbl *iwpbl_shadow; + struct list_head resize_list; +}; + +struct disconn_work { + struct work_struct work; + struct irdma_qp *iwqp; +}; + +struct iw_cm_id; + +struct irdma_qp_kmode { + struct irdma_dma_mem dma_mem; + struct irdma_sq_uk_wr_trk_info *sq_wrid_mem; + u64 *rq_wrid_mem; +}; + +struct irdma_qp { + struct ib_qp ibqp; + struct irdma_sc_qp sc_qp; + struct irdma_device *iwdev; + struct irdma_cq *iwscq; + struct irdma_cq *iwrcq; + struct irdma_pd *iwpd; + struct rdma_user_mmap_entry *push_wqe_mmap_entry; + struct
rdma_user_mmap_entry *push_db_mmap_entry; + struct irdma_qp_host_ctx_info ctx_info; + union { + struct irdma_iwarp_offload_info iwarp_info; + struct irdma_roce_offload_info roce_info; + }; + + union { + struct irdma_tcp_offload_info tcp_info; + struct irdma_udp_offload_info udp_info; + }; + + struct irdma_ah roce_ah; + struct list_head teardown_entry; + refcount_t refcnt; + struct iw_cm_id *cm_id; + void *cm_node; + struct ib_mr *lsmm_mr; + struct work_struct work; + atomic_t hw_mod_qp_pend; + enum ib_qp_state ibqp_state; + u32 qp_mem_size; + u32 last_aeq; + int max_send_wr; + int max_recv_wr; + atomic_t close_timer_started; + spinlock_t lock; /* serialize posting WRs to SQ/RQ */ + struct irdma_qp_context *iwqp_context; + void *pbl_vbase; + dma_addr_t pbl_pbase; + struct page *page; + u8 active_conn : 1; + u8 user_mode : 1; + u8 hte_added : 1; + u8 flush_issued : 1; + u8 destroyed : 1; + u8 sig_all : 1; + u8 pau_mode : 1; + u8 rsvd : 1; + u8 iwarp_state; + u16 term_sq_flush_code; + u16 term_rq_flush_code; + u8 hw_iwarp_state; + u8 hw_tcp_state; + struct irdma_qp_kmode kqp; + struct irdma_dma_mem host_ctx; + struct timer_list terminate_timer; + struct irdma_pbl *iwpbl; + struct irdma_dma_mem q2_ctx_mem; + struct irdma_dma_mem ietf_mem; + struct completion sq_drained; + struct completion rq_drained; + wait_queue_head_t waitq; + wait_queue_head_t mod_qp_waitq; + u8 rts_ae_rcvd; +}; + +struct irdma_user_mmap_entry { + struct rdma_user_mmap_entry rdma_entry; + u64 bar_offset; + u8 mmap_flag; +}; + +void irdma_mcast_mac(u32 *ip_addr, u8 *mac, bool ipv4); +int irdma_ib_register_device(struct irdma_device *iwdev); +void irdma_ib_unregister_device(struct irdma_device *iwdev); +void irdma_ib_dealloc_device(struct ib_device *ibdev); +struct irdma_device *irdma_get_device(struct net_device *netdev); +void irdma_put_device(struct irdma_device *iwdev); +#endif /* IRDMA_VERBS_H */ diff --git a/include/uapi/rdma/ib_user_ioctl_verbs.h b/include/uapi/rdma/ib_user_ioctl_verbs.h index a640bb814be0..a1f82731bd10 100644 --- a/include/uapi/rdma/ib_user_ioctl_verbs.h +++ b/include/uapi/rdma/ib_user_ioctl_verbs.h @@ -196,6 +196,7 @@ enum rdma_driver_id { RDMA_DRIVER_OCRDMA, RDMA_DRIVER_NES, RDMA_DRIVER_I40IW, + RDMA_DRIVER_IRDMA = RDMA_DRIVER_I40IW, RDMA_DRIVER_VMW_PVRDMA, RDMA_DRIVER_QEDR, RDMA_DRIVER_HNS, From patchwork Fri Apr 17 17:12:45 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeff Kirsher X-Patchwork-Id: 221091 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.0 required=3.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, UNWANTED_LANGUAGE_BODY,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 07754C3815B for ; Fri, 17 Apr 2020 17:13:20 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id DF8E82076D for ; Fri, 17 Apr 2020 17:13:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729481AbgDQRNT (ORCPT ); Fri, 17 Apr 2020 13:13:19 -0400 Received: from mga01.intel.com ([192.55.52.88]:30122 "EHLO mga01.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729440AbgDQRNS (ORCPT ); Fri, 17 Apr 2020 13:13:18 -0400 
IronPort-SDR: y7DFSBzGX1p5WT2s119yfNaeEeodyH+umcQtLSaZ4HZKoQN5CuByW8WT0RyG8qLDxyoBWBP8He ctDVkl9138wQ== X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 17 Apr 2020 10:12:57 -0700 IronPort-SDR: dUlFi3dDd89kJZg0HQ7knO3APx8P8QTtclgqcp4BZo/koZgKUogn9Q0QcfbZ7Ly3F/Gtx+RsIr edwCUQ35G5kg== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.72,395,1580803200"; d="scan'208";a="364383737" Received: from jtkirshe-desk1.jf.intel.com ([134.134.177.86]) by fmsmga001.fm.intel.com with ESMTP; 17 Apr 2020 10:12:56 -0700 From: Jeff Kirsher To: gregkh@linuxfoundation.org, jgg@ziepe.ca Cc: Mustafa Ismail , netdev@vger.kernel.org, linux-rdma@vger.kernel.org, nhorman@redhat.com, sassmann@redhat.com, Shiraz Saleem Subject: [RFC PATCH v5 10/16] RDMA/irdma: Add RoCEv2 UD OP support Date: Fri, 17 Apr 2020 10:12:45 -0700 Message-Id: <20200417171251.1533371-11-jeffrey.t.kirsher@intel.com> X-Mailer: git-send-email 2.25.2 In-Reply-To: <20200417171251.1533371-1-jeffrey.t.kirsher@intel.com> References: <20200417171251.1533371-1-jeffrey.t.kirsher@intel.com> MIME-Version: 1.0 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Mustafa Ismail Add the header, data structures and functions to populate the WQE descriptors and issue the Control QP commands that support RoCEv2 UD operations. Signed-off-by: Mustafa Ismail Signed-off-by: Shiraz Saleem --- drivers/infiniband/hw/irdma/uda.c | 390 ++++++++++++++++++++++++++++ drivers/infiniband/hw/irdma/uda.h | 64 +++++ drivers/infiniband/hw/irdma/uda_d.h | 382 +++++++++++++++++++++++++++ 3 files changed, 836 insertions(+) create mode 100644 drivers/infiniband/hw/irdma/uda.c create mode 100644 drivers/infiniband/hw/irdma/uda.h create mode 100644 drivers/infiniband/hw/irdma/uda_d.h diff --git a/drivers/infiniband/hw/irdma/uda.c b/drivers/infiniband/hw/irdma/uda.c new file mode 100644 index 000000000000..08c9f486491e --- /dev/null +++ b/drivers/infiniband/hw/irdma/uda.c @@ -0,0 +1,390 @@ +// SPDX-License-Identifier: GPL-2.0 or Linux-OpenIB +/* Copyright (c) 2019 Intel Corporation */ +#include "osdep.h" +#include "status.h" +#include "hmc.h" +#include "defs.h" +#include "type.h" +#include "protos.h" +#include "uda.h" +#include "uda_d.h" + +/** + * irdma_sc_ah_init - initialize sc ah struct + * @dev: sc device struct + * @ah: sc ah ptr + */ +static void irdma_sc_init_ah(struct irdma_sc_dev *dev, struct irdma_sc_ah *ah) +{ + ah->dev = dev; +} + +/** + * irdma_sc_access_ah() - Create, modify or delete AH + * @cqp: struct for cqp hw + * @info: ah information + * @op: Operation + * @scratch: u64 saved to be used during cqp completion + */ +static enum irdma_status_code irdma_sc_access_ah(struct irdma_sc_cqp *cqp, + struct irdma_ah_info *info, + u32 op, u64 scratch) +{ + __le64 *wqe; + u64 qw1, qw2; + + wqe = irdma_sc_cqp_get_next_send_wqe(cqp, scratch); + if (!wqe) + return IRDMA_ERR_RING_FULL; + + set_64bit_val(wqe, 0, LS_64_1(info->mac_addr[5], 16) | + LS_64_1(info->mac_addr[4], 24) | + LS_64_1(info->mac_addr[3], 32) | + LS_64_1(info->mac_addr[2], 40) | + LS_64_1(info->mac_addr[1], 48) | + LS_64_1(info->mac_addr[0], 56)); + + qw1 = LS_64(info->pd_idx, IRDMA_UDA_CQPSQ_MAV_PDINDEXLO) | + LS_64(info->tc_tos, IRDMA_UDA_CQPSQ_MAV_TC) | + LS_64(info->vlan_tag, IRDMA_UDAQPC_VLANTAG); + + qw2 = LS_64(info->dst_arpindex, IRDMA_UDA_CQPSQ_MAV_ARPINDEX) | + LS_64(info->flow_label, 
IRDMA_UDA_CQPSQ_MAV_FLOWLABEL) | + LS_64(info->hop_ttl, IRDMA_UDA_CQPSQ_MAV_HOPLIMIT) | + LS_64(info->pd_idx >> 16, IRDMA_UDA_CQPSQ_MAV_PDINDEXHI); + + if (!info->ipv4_valid) { + set_64bit_val(wqe, 40, + LS_64(info->dest_ip_addr[0], IRDMA_UDA_CQPSQ_MAV_ADDR0) | + LS_64(info->dest_ip_addr[1], IRDMA_UDA_CQPSQ_MAV_ADDR1)); + set_64bit_val(wqe, 32, + LS_64(info->dest_ip_addr[2], IRDMA_UDA_CQPSQ_MAV_ADDR2) | + LS_64(info->dest_ip_addr[3], IRDMA_UDA_CQPSQ_MAV_ADDR3)); + + set_64bit_val(wqe, 56, + LS_64(info->src_ip_addr[0], IRDMA_UDA_CQPSQ_MAV_ADDR0) | + LS_64(info->src_ip_addr[1], IRDMA_UDA_CQPSQ_MAV_ADDR1)); + set_64bit_val(wqe, 48, + LS_64(info->src_ip_addr[2], IRDMA_UDA_CQPSQ_MAV_ADDR2) | + LS_64(info->src_ip_addr[3], IRDMA_UDA_CQPSQ_MAV_ADDR3)); + } else { + set_64bit_val(wqe, 32, + LS_64(info->dest_ip_addr[0], IRDMA_UDA_CQPSQ_MAV_ADDR3)); + + set_64bit_val(wqe, 48, + LS_64(info->src_ip_addr[0], IRDMA_UDA_CQPSQ_MAV_ADDR3)); + } + + set_64bit_val(wqe, 8, qw1); + set_64bit_val(wqe, 16, qw2); + + dma_wmb(); /* need write block before writing WQE header */ + + set_64bit_val( + wqe, 24, + LS_64(cqp->polarity, IRDMA_UDA_CQPSQ_MAV_WQEVALID) | + LS_64(op, IRDMA_UDA_CQPSQ_MAV_OPCODE) | + LS_64(info->do_lpbk, IRDMA_UDA_CQPSQ_MAV_DOLOOPBACKK) | + LS_64(info->ipv4_valid, IRDMA_UDA_CQPSQ_MAV_IPV4VALID) | + LS_64(info->ah_idx, IRDMA_UDA_CQPSQ_MAV_AVIDX) | + LS_64(info->insert_vlan_tag, + IRDMA_UDA_CQPSQ_MAV_INSERTVLANTAG)); + + irdma_debug_buf(cqp->dev, IRDMA_DEBUG_WQE, "MANAGE_AH WQE", wqe, + IRDMA_CQP_WQE_SIZE * 8); + irdma_sc_cqp_post_sq(cqp); + + return 0; +} + +/** + * irdma_sc_create_ah() - Create AH + * @cqp: struct for cqp hw + * @info: ah information + * @scratch: u64 saved to be used during cqp completion + */ +static enum irdma_status_code irdma_sc_create_ah(struct irdma_sc_cqp *cqp, + struct irdma_ah_info *info, + u64 scratch) +{ + return irdma_sc_access_ah(cqp, info, IRDMA_CQP_OP_CREATE_ADDR_HANDLE, + scratch); +} + +/** + * irdma_sc_modify_ah() - Modify AH + * @cqp: struct for cqp hw + * @info: ah information + * @scratch: u64 saved to be used during cqp completion + */ +static enum irdma_status_code irdma_sc_modify_ah(struct irdma_sc_cqp *cqp, + struct irdma_ah_info *info, + u64 scratch) +{ + return irdma_sc_access_ah(cqp, info, IRDMA_CQP_OP_MODIFY_ADDR_HANDLE, + scratch); +} + +/** + * irdma_sc_destroy_ah() - Delete AH + * @cqp: struct for cqp hw + * @info: ah information + * @scratch: u64 saved to be used during cqp completion + */ +static enum irdma_status_code irdma_sc_destroy_ah(struct irdma_sc_cqp *cqp, + struct irdma_ah_info *info, + u64 scratch) +{ + return irdma_sc_access_ah(cqp, info, IRDMA_CQP_OP_DESTROY_ADDR_HANDLE, + scratch); +} + +/** + * irdma_create_mg_ctx() - create a mcg context + * @info: multicast group context info + */ +static enum irdma_status_code +irdma_create_mg_ctx(struct irdma_mcast_grp_info *info) +{ + struct irdma_mcast_grp_ctx_entry_info *entry_info = NULL; + u8 idx = 0; /* index in the array */ + u8 ctx_idx = 0; /* index in the MG context */ + + memset(info->dma_mem_mc.va, 0, IRDMA_MAX_MGS_PER_CTX * sizeof(u64)); + + for (idx = 0; idx < IRDMA_MAX_MGS_PER_CTX; idx++) { + entry_info = &info->mg_ctx_info[idx]; + if (entry_info->valid_entry) { + set_64bit_val((__le64 *)info->dma_mem_mc.va, + ctx_idx * sizeof(u64), + LS_64(entry_info->dest_port, IRDMA_UDA_MGCTX_DESTPORT) | + LS_64(entry_info->valid_entry, IRDMA_UDA_MGCTX_VALIDENT) | + LS_64(entry_info->qp_id, IRDMA_UDA_MGCTX_QPID)); + ctx_idx++; + } + } + + return 0; +} + +/** + * irdma_access_mcast_grp() - Access
mcast group based on op + * @cqp: Control QP + * @info: multicast group context info + * @op: operation to perform + * @scratch: u64 saved to be used during cqp completion + */ +static enum irdma_status_code +irdma_access_mcast_grp(struct irdma_sc_cqp *cqp, + struct irdma_mcast_grp_info *info, u32 op, u64 scratch) +{ + __le64 *wqe; + enum irdma_status_code ret_code = 0; + + if (info->mg_id >= IRDMA_UDA_MAX_FSI_MGS) { + dev_dbg(rfdev_to_dev(cqp->dev), "WQE: mg_id out of range\n"); + return IRDMA_ERR_PARAM; + } + + wqe = irdma_sc_cqp_get_next_send_wqe(cqp, scratch); + if (!wqe) { + dev_dbg(rfdev_to_dev(cqp->dev), "WQE: ring full\n"); + return IRDMA_ERR_RING_FULL; + } + + ret_code = irdma_create_mg_ctx(info); + if (ret_code) + return ret_code; + + set_64bit_val(wqe, 32, info->dma_mem_mc.pa); + set_64bit_val(wqe, 16, + LS_64(info->vlan_id, IRDMA_UDA_CQPSQ_MG_VLANID) | + LS_64(info->qs_handle, IRDMA_UDA_CQPSQ_QS_HANDLE)); + set_64bit_val(wqe, 0, LS_64_1(info->dest_mac_addr[5], 0) | + LS_64_1(info->dest_mac_addr[4], 8) | + LS_64_1(info->dest_mac_addr[3], 16) | + LS_64_1(info->dest_mac_addr[2], 24) | + LS_64_1(info->dest_mac_addr[1], 32) | + LS_64_1(info->dest_mac_addr[0], 40)); + set_64bit_val(wqe, 8, + LS_64(info->hmc_fcn_id, IRDMA_UDA_CQPSQ_MG_HMC_FCN_ID)); + + if (!info->ipv4_valid) { + set_64bit_val(wqe, 56, + LS_64(info->dest_ip_addr[0], IRDMA_UDA_CQPSQ_MAV_ADDR0) | + LS_64(info->dest_ip_addr[1], IRDMA_UDA_CQPSQ_MAV_ADDR1)); + set_64bit_val(wqe, 48, + LS_64(info->dest_ip_addr[2], IRDMA_UDA_CQPSQ_MAV_ADDR2) | + LS_64(info->dest_ip_addr[3], IRDMA_UDA_CQPSQ_MAV_ADDR3)); + } else { + set_64bit_val(wqe, 48, + LS_64(info->dest_ip_addr[0], IRDMA_UDA_CQPSQ_MAV_ADDR3)); + } + + dma_wmb(); /* need write memory block before writing the WQE header. */ + + set_64bit_val(wqe, 24, + LS_64(cqp->polarity, IRDMA_UDA_CQPSQ_MG_WQEVALID) | + LS_64(op, IRDMA_UDA_CQPSQ_MG_OPCODE) | + LS_64(info->mg_id, IRDMA_UDA_CQPSQ_MG_MGIDX) | + LS_64(info->vlan_valid, IRDMA_UDA_CQPSQ_MG_VLANVALID) | + LS_64(info->ipv4_valid, IRDMA_UDA_CQPSQ_MG_IPV4VALID)); + + irdma_debug_buf(cqp->dev, IRDMA_DEBUG_WQE, "MANAGE_MCG WQE", wqe, + IRDMA_CQP_WQE_SIZE * 8); + irdma_debug_buf(cqp->dev, IRDMA_DEBUG_WQE, "MCG_HOST CTX WQE", + info->dma_mem_mc.va, IRDMA_MAX_MGS_PER_CTX * 8); + irdma_sc_cqp_post_sq(cqp); + + return 0; +} + +/** + * irdma_sc_create_mcast_grp() - Create mcast group. 
+ * @cqp: Control QP + * @info: multicast group context info + * @scratch: u64 saved to be used during cqp completion + */ +static enum irdma_status_code +irdma_sc_create_mcast_grp(struct irdma_sc_cqp *cqp, + struct irdma_mcast_grp_info *info, u64 scratch) +{ + return irdma_access_mcast_grp(cqp, info, IRDMA_CQP_OP_CREATE_MCAST_GRP, + scratch); +} + +/** + * irdma_sc_modify_mcast_grp() - Modify mcast group + * @cqp: Control QP + * @info: multicast group context info + * @scratch: u64 saved to be used during cqp completion + */ +static enum irdma_status_code +irdma_sc_modify_mcast_grp(struct irdma_sc_cqp *cqp, + struct irdma_mcast_grp_info *info, u64 scratch) +{ + return irdma_access_mcast_grp(cqp, info, IRDMA_CQP_OP_MODIFY_MCAST_GRP, + scratch); +} + +/** + * irdma_sc_destroy_mcast_grp() - Destroys mcast group + * @cqp: Control QP + * @info: multicast group context info + * @scratch: u64 saved to be used during cqp completion + */ +static enum irdma_status_code +irdma_sc_destroy_mcast_grp(struct irdma_sc_cqp *cqp, + struct irdma_mcast_grp_info *info, u64 scratch) +{ + return irdma_access_mcast_grp(cqp, info, IRDMA_CQP_OP_DESTROY_MCAST_GRP, + scratch); +} + +/** + * irdma_compare_mgs - Compares two multicast group structures + * @entry1: Multicast group info + * @entry2: Multicast group info in context + */ +static bool irdma_compare_mgs(struct irdma_mcast_grp_ctx_entry_info *entry1, + struct irdma_mcast_grp_ctx_entry_info *entry2) +{ + if (entry1->dest_port == entry2->dest_port && + entry1->qp_id == entry2->qp_id) + return true; + + return false; +} + +/** + * irdma_sc_add_mcast_grp - Allocates mcast group entry in ctx + * @ctx: Multicast group context + * @mg: Multicast group info + */ +static enum irdma_status_code +irdma_sc_add_mcast_grp(struct irdma_mcast_grp_info *ctx, + struct irdma_mcast_grp_ctx_entry_info *mg) +{ + u32 idx; + bool free_entry_found = false; + u32 free_entry_idx = 0; + + /* find either an identical or a free entry for a multicast group */ + for (idx = 0; idx < IRDMA_MAX_MGS_PER_CTX; idx++) { + if (ctx->mg_ctx_info[idx].valid_entry) { + if (irdma_compare_mgs(&ctx->mg_ctx_info[idx], mg)) { + ctx->mg_ctx_info[idx].use_cnt++; + return 0; + } + continue; + } + if (!free_entry_found) { + free_entry_found = true; + free_entry_idx = idx; + } + } + + if (free_entry_found) { + ctx->mg_ctx_info[free_entry_idx] = *mg; + ctx->mg_ctx_info[free_entry_idx].valid_entry = true; + ctx->mg_ctx_info[free_entry_idx].use_cnt = 1; + ctx->no_of_mgs++; + return 0; + } + + return IRDMA_ERR_NO_MEMORY; +} + +/** + * irdma_sc_del_mcast_grp - Delete mcast group + * @ctx: Multicast group context + * @mg: Multicast group info + * + * Finds and removes a specific multicast group from context, all + * parameters must match to remove a multicast group.
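+ * When the last reference to an entry is dropped the entry is invalidated
+ * and the context is compacted so no gap is left in the MG entry array.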
+ */ +static enum irdma_status_code +irdma_sc_del_mcast_grp(struct irdma_mcast_grp_info *ctx, + struct irdma_mcast_grp_ctx_entry_info *mg) +{ + u32 idx; + + /* find an entry in multicast group context */ + for (idx = 0; idx < IRDMA_MAX_MGS_PER_CTX; idx++) { + if (!ctx->mg_ctx_info[idx].valid_entry) + continue; + + if (irdma_compare_mgs(mg, &ctx->mg_ctx_info[idx])) { + ctx->mg_ctx_info[idx].use_cnt--; + + if (!ctx->mg_ctx_info[idx].use_cnt) { + ctx->mg_ctx_info[idx].valid_entry = false; + ctx->no_of_mgs--; + /* Remove gap if element was not the last */ + if (idx != ctx->no_of_mgs && + ctx->no_of_mgs > 0) { + memcpy(&ctx->mg_ctx_info[idx], + &ctx->mg_ctx_info[ctx->no_of_mgs - 1], + sizeof(ctx->mg_ctx_info[idx])); + ctx->mg_ctx_info[ctx->no_of_mgs - 1].valid_entry = false; + } + } + + return 0; + } + } + + return IRDMA_ERR_PARAM; +} + +struct irdma_uda_ops irdma_uda_ops = { + .create_ah = irdma_sc_create_ah, + .destroy_ah = irdma_sc_destroy_ah, + .init_ah = irdma_sc_init_ah, + .mcast_grp_add = irdma_sc_add_mcast_grp, + .mcast_grp_create = irdma_sc_create_mcast_grp, + .mcast_grp_del = irdma_sc_del_mcast_grp, + .mcast_grp_destroy = irdma_sc_destroy_mcast_grp, + .mcast_grp_modify = irdma_sc_modify_mcast_grp, + .modify_ah = irdma_sc_modify_ah, +}; diff --git a/drivers/infiniband/hw/irdma/uda.h b/drivers/infiniband/hw/irdma/uda.h new file mode 100644 index 000000000000..71c399048a57 --- /dev/null +++ b/drivers/infiniband/hw/irdma/uda.h @@ -0,0 +1,64 @@ +/* SPDX-License-Identifier: GPL-2.0 or Linux-OpenIB */ +/* Copyright (c) 2019 Intel Corporation */ +#ifndef IRDMA_UDA_H +#define IRDMA_UDA_H + +extern struct irdma_uda_ops irdma_uda_ops; + +#define IRDMA_UDA_MAX_FSI_MGS 4096 +#define IRDMA_UDA_MAX_PFS 16 +#define IRDMA_UDA_MAX_VFS 128 + +struct irdma_sc_cqp; + +struct irdma_ah_info { + struct irdma_sc_ah *ah; + struct irdma_sc_vsi *vsi; + u32 pd_idx; + u32 dst_arpindex; + u32 dest_ip_addr[4]; + u32 src_ip_addr[4]; + u32 flow_label; + u32 ah_idx; + u16 vlan_tag; + u8 insert_vlan_tag; + u8 tc_tos; + u8 hop_ttl; + u8 mac_addr[ETH_ALEN]; + bool ah_valid:1; + bool ipv4_valid:1; + bool do_lpbk:1; +}; + +struct irdma_sc_ah { + struct irdma_sc_dev *dev; + struct irdma_ah_info ah_info; +}; + +struct irdma_uda_ops { + void (*init_ah)(struct irdma_sc_dev *dev, struct irdma_sc_ah *ah); + enum irdma_status_code (*create_ah)(struct irdma_sc_cqp *cqp, + struct irdma_ah_info *info, + u64 scratch); + enum irdma_status_code (*modify_ah)(struct irdma_sc_cqp *cqp, + struct irdma_ah_info *info, + u64 scratch); + enum irdma_status_code (*destroy_ah)(struct irdma_sc_cqp *cqp, + struct irdma_ah_info *info, + u64 scratch); + /* multicast */ + enum irdma_status_code (*mcast_grp_create)(struct irdma_sc_cqp *cqp, + struct irdma_mcast_grp_info *info, + u64 scratch); + enum irdma_status_code (*mcast_grp_modify)(struct irdma_sc_cqp *cqp, + struct irdma_mcast_grp_info *info, + u64 scratch); + enum irdma_status_code (*mcast_grp_destroy)(struct irdma_sc_cqp *cqp, + struct irdma_mcast_grp_info *info, + u64 scratch); + enum irdma_status_code (*mcast_grp_add)(struct irdma_mcast_grp_info *ctx, + struct irdma_mcast_grp_ctx_entry_info *mg); + enum irdma_status_code (*mcast_grp_del)(struct irdma_mcast_grp_info *ctx, + struct irdma_mcast_grp_ctx_entry_info *mg); +}; +#endif /* IRDMA_UDA_H */ diff --git a/drivers/infiniband/hw/irdma/uda_d.h b/drivers/infiniband/hw/irdma/uda_d.h new file mode 100644 index 000000000000..266e9ed567c0 --- /dev/null +++ b/drivers/infiniband/hw/irdma/uda_d.h @@ -0,0 +1,382 @@ +/* SPDX-License-Identifier: GPL-2.0 
or Linux-OpenIB */ +/* Copyright (c) 2019 Intel Corporation */ +#ifndef IRDMA_UDA_D_H +#define IRDMA_UDA_D_H + +/* L4 packet type */ +#define IRDMA_E_UDA_SQ_L4T_UNKNOWN 0 +#define IRDMA_E_UDA_SQ_L4T_TCP 1 +#define IRDMA_E_UDA_SQ_L4T_SCTP 2 +#define IRDMA_E_UDA_SQ_L4T_UDP 3 + +/* Inner IP header type */ +#define IRDMA_E_UDA_SQ_IIPT_UNKNOWN 0 +#define IRDMA_E_UDA_SQ_IIPT_IPV6 1 +#define IRDMA_E_UDA_SQ_IIPT_IPV4_NO_CSUM 2 +#define IRDMA_E_UDA_SQ_IIPT_IPV4_CSUM 3 + +/* UDA defined fields for transmit descriptors */ +#define IRDMA_UDA_QPSQ_PUSHWQE_S 56 +#define IRDMA_UDA_QPSQ_PUSHWQE_M BIT_ULL(IRDMA_UDA_QPSQ_PUSHWQE_S) + +#define IRDMA_UDA_QPSQ_INLINEDATAFLAG_S 57 +#define IRDMA_UDA_QPSQ_INLINEDATAFLAG_M \ + BIT_ULL(IRDMA_UDA_QPSQ_INLINEDATAFLAG_S) + +#define IRDMA_UDA_QPSQ_INLINEDATALEN_S 48 +#define IRDMA_UDA_QPSQ_INLINEDATALEN_M \ + ((u64)0xff << IRDMA_UDA_QPSQ_INLINEDATALEN_S) + +#define IRDMA_UDA_QPSQ_ADDFRAGCNT_S 38 +#define IRDMA_UDA_QPSQ_ADDFRAGCNT_M \ + ((u64)0x0F << IRDMA_UDA_QPSQ_ADDFRAGCNT_S) + +#define IRDMA_UDA_QPSQ_IPFRAGFLAGS_S 42 +#define IRDMA_UDA_QPSQ_IPFRAGFLAGS_M \ + ((u64)0x3 << IRDMA_UDA_QPSQ_IPFRAGFLAGS_S) + +#define IRDMA_UDA_QPSQ_NOCHECKSUM_S 45 +#define IRDMA_UDA_QPSQ_NOCHECKSUM_M \ + BIT_ULL(IRDMA_UDA_QPSQ_NOCHECKSUM_S) + +#define IRDMA_UDA_QPSQ_AHIDXVALID_S 46 +#define IRDMA_UDA_QPSQ_AHIDXVALID_M \ + BIT_ULL(IRDMA_UDA_QPSQ_AHIDXVALID_S) + +#define IRDMA_UDA_QPSQ_LOCAL_FENCE_S 61 +#define IRDMA_UDA_QPSQ_LOCAL_FENCE_M \ + BIT_ULL(IRDMA_UDA_QPSQ_LOCAL_FENCE_S) + +#define IRDMA_UDA_QPSQ_AHIDX_S 0 +#define IRDMA_UDA_QPSQ_AHIDX_M ((u64)0x1ffff << IRDMA_UDA_QPSQ_AHIDX_S) + +#define IRDMA_UDA_QPSQ_PROTOCOL_S 16 +#define IRDMA_UDA_QPSQ_PROTOCOL_M \ + ((u64)0xff << IRDMA_UDA_QPSQ_PROTOCOL_S) + +#define IRDMA_UDA_QPSQ_EXTHDRLEN_S 32 +#define IRDMA_UDA_QPSQ_EXTHDRLEN_M \ + ((u64)0x1ff << IRDMA_UDA_QPSQ_EXTHDRLEN_S) + +#define IRDMA_UDA_QPSQ_MULTICAST_S 63 +#define IRDMA_UDA_QPSQ_MULTICAST_M \ + BIT_ULL(IRDMA_UDA_QPSQ_MULTICAST_S) + +#define IRDMA_UDA_QPSQ_MACLEN_S 56 +#define IRDMA_UDA_QPSQ_MACLEN_M \ + ((u64)0x7f << IRDMA_UDA_QPSQ_MACLEN_S) +#define IRDMA_UDA_QPSQ_MACLEN_LINE 2 + +#define IRDMA_UDA_QPSQ_IPLEN_S 48 +#define IRDMA_UDA_QPSQ_IPLEN_M \ + ((u64)0x7f << IRDMA_UDA_QPSQ_IPLEN_S) +#define IRDMA_UDA_QPSQ_IPLEN_LINE 2 + +#define IRDMA_UDA_QPSQ_L4T_S 30 +#define IRDMA_UDA_QPSQ_L4T_M ((u64)0x3 << IRDMA_UDA_QPSQ_L4T_S) +#define IRDMA_UDA_QPSQ_L4T_LINE 2 + +#define IRDMA_UDA_QPSQ_IIPT_S 28 +#define IRDMA_UDA_QPSQ_IIPT_M ((u64)0x3 << IRDMA_UDA_QPSQ_IIPT_S) +#define IRDMA_UDA_QPSQ_IIPT_LINE 2 + +#define IRDMA_UDA_QPSQ_DO_LPB_LINE 3 + +#define IRDMA_UDA_QPSQ_FWD_PROG_CONFIRM_S 45 +#define IRDMA_UDA_QPSQ_FWD_PROG_CONFIRM_M \ + BIT_ULL(IRDMA_UDA_QPSQ_FWD_PROG_CONFIRM_S) +#define IRDMA_UDA_QPSQ_FWD_PROG_CONFIRM_LINE 3 + +#define IRDMA_UDA_QPSQ_IMMDATA_S 0 +#define IRDMA_UDA_QPSQ_IMMDATA_M \ + ((u64)0xffffffffffffffff << IRDMA_UDA_QPSQ_IMMDATA_S) + +/* Byte Offset 0 */ +#define IRDMA_UDAQPC_IPV4_S 3 +#define IRDMA_UDAQPC_IPV4_M BIT_ULL(IRDMAQPC_IPV4_S) + +#define IRDMA_UDAQPC_INSERTVLANTAG_S 5 +#define IRDMA_UDAQPC_INSERTVLANTAG_M BIT_ULL(IRDMA_UDAQPC_INSERTVLANTAG_S) + +#define IRDMA_UDAQPC_ISQP1_S 6 +#define IRDMA_UDAQPC_ISQP1_M BIT_ULL(IRDMA_UDAQPC_ISQP1_S) + +#define IRDMA_UDAQPC_RQWQESIZE_S IRDMAQPC_RQWQESIZE_S +#define IRDMA_UDAQPC_RQWQESIZE_M IRDMAQPC_RQWQESIZE_M + +#define IRDMA_UDAQPC_ECNENABLE_S 14 +#define IRDMA_UDAQPC_ECNENABLE_M BIT_ULL(IRDMA_UDAQPC_ECNENABLE_S) + +#define IRDMA_UDAQPC_PDINDEXHI_S 20 +#define IRDMA_UDAQPC_PDINDEXHI_M ((u64)3 << IRDMA_UDAQPC_PDINDEXHI_S) 
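Every field in the descriptor layouts above and below is described by an _S (bit shift) and _M (mask) pair. Throughout the series these pairs are consumed by LS_64/RS_64 pack-and-extract helpers whose definitions live in the driver's defs.h, which is not part of this patch. The standalone sketch below is illustrative only; it re-creates those helpers from the way they are used in uk.c later in the series and shows how an _S/_M pair packs a value into, and pulls it back out of, a 64-bit descriptor word. The local field values mirror IRDMA_UDA_QPSQ_L4T_S/_M and IRDMA_UDA_QPSQ_AHIDX_S/_M defined above.

/*
 * Illustrative sketch -- not part of the submitted patch. LS_64/RS_64 are
 * reconstructed from their usage in this series; the driver's own versions
 * are defined in defs.h.
 */
#include <stdio.h>
#include <stdint.h>

typedef uint64_t u64;

#define UDA_QPSQ_L4T_S    30
#define UDA_QPSQ_L4T_M    ((u64)0x3 << UDA_QPSQ_L4T_S)
#define UDA_QPSQ_AHIDX_S  0
#define UDA_QPSQ_AHIDX_M  ((u64)0x1ffff << UDA_QPSQ_AHIDX_S)

/* shift the value into position and clip it to the field's mask */
#define LS_64(val, field) (((u64)(val) << field##_S) & field##_M)
/* mask the field out of a qword and shift it back down to bit 0 */
#define RS_64(qw, field)  (((u64)(qw) & field##_M) >> field##_S)

int main(void)
{
	/* build one descriptor qword: L4 type = UDP (3), AH index = 0x1abc */
	u64 qw = LS_64(3, UDA_QPSQ_L4T) | LS_64(0x1abc, UDA_QPSQ_AHIDX);

	printf("l4t=%llu ahidx=0x%llx\n",
	       (unsigned long long)RS_64(qw, UDA_QPSQ_L4T),
	       (unsigned long long)RS_64(qw, UDA_QPSQ_AHIDX));
	return 0;
}

Compiled on its own, the sketch prints "l4t=3 ahidx=0x1abc", i.e. each value comes back exactly as it was packed, which is the contract the driver relies on when it ORs LS_64() terms together to build WQE and context words.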
+ +#define IRDMA_UDAQPC_DCTCPENABLE_S 25 +#define IRDMA_UDAQPC_DCTCPENABLE_M BIT_ULL(IRDMA_UDAQPC_DCTCPENABLE_S) + +#define IRDMA_UDAQPC_RCVTPHEN_S IRDMAQPC_RCVTPHEN_S +#define IRDMA_UDAQPC_RCVTPHEN_M IRDMAQPC_RCVTPHEN_M + +#define IRDMA_UDAQPC_XMITTPHEN_S IRDMAQPC_XMITTPHEN_S +#define IRDMA_UDAQPC_XMITTPHEN_M IRDMAQPC_XMITTPHEN_M + +#define IRDMA_UDAQPC_RQTPHEN_S IRDMAQPC_RQTPHEN_S +#define IRDMA_UDAQPC_RQTPHEN_M IRDMAQPC_RQTPHEN_M + +#define IRDMA_UDAQPC_SQTPHEN_S IRDMAQPC_SQTPHEN_S +#define IRDMA_UDAQPC_SQTPHEN_M IRDMAQPC_SQTPHEN_M + +#define IRDMA_UDAQPC_PPIDX_S IRDMAQPC_PPIDX_S +#define IRDMA_UDAQPC_PPIDX_M IRDMAQPC_PPIDX_M + +#define IRDMA_UDAQPC_PMENA_S IRDMAQPC_PMENA_S +#define IRDMA_UDAQPC_PMENA_M IRDMAQPC_PMENA_M + +#define IRDMA_UDAQPC_INSERTTAG2_S 11 +#define IRDMA_UDAQPC_INSERTTAG2_M BIT_ULL(IRDMA_UDAQPC_INSERTTAG2_S) + +#define IRDMA_UDAQPC_INSERTTAG3_S 14 +#define IRDMA_UDAQPC_INSERTTAG3_M BIT_ULL(IRDMA_UDAQPC_INSERTTAG3_S) + +#define IRDMA_UDAQPC_RQSIZE_S IRDMAQPC_RQSIZE_S +#define IRDMA_UDAQPC_RQSIZE_M IRDMAQPC_RQSIZE_M + +#define IRDMA_UDAQPC_SQSIZE_S IRDMAQPC_SQSIZE_S +#define IRDMA_UDAQPC_SQSIZE_M IRDMAQPC_SQSIZE_M + +#define IRDMA_UDAQPC_TXCQNUM_S IRDMAQPC_TXCQNUM_S +#define IRDMA_UDAQPC_TXCQNUM_M IRDMAQPC_TXCQNUM_M + +#define IRDMA_UDAQPC_RXCQNUM_S IRDMAQPC_RXCQNUM_S +#define IRDMA_UDAQPC_RXCQNUM_M IRDMAQPC_RXCQNUM_M + +#define IRDMA_UDAQPC_QPCOMPCTX_S IRDMAQPC_QPCOMPCTX_S +#define IRDMA_UDAQPC_QPCOMPCTX_M IRDMAQPC_QPCOMPCTX_M + +#define IRDMA_UDAQPC_SQTPHVAL_S IRDMAQPC_SQTPHVAL_S +#define IRDMA_UDAQPC_SQTPHVAL_M IRDMAQPC_SQTPHVAL_M + +#define IRDMA_UDAQPC_RQTPHVAL_S IRDMAQPC_RQTPHVAL_S +#define IRDMA_UDAQPC_RQTPHVAL_M IRDMAQPC_RQTPHVAL_M + +#define IRDMA_UDAQPC_QSHANDLE_S IRDMAQPC_QSHANDLE_S +#define IRDMA_UDAQPC_QSHANDLE_M IRDMAQPC_QSHANDLE_M + +#define IRDMA_UDAQPC_RQHDRRINGBUFSIZE_S 48 +#define IRDMA_UDAQPC_RQHDRRINGBUFSIZE_M \ + ((u64)0x3 << IRDMA_UDAQPC_RQHDRRINGBUFSIZE_S) + +#define IRDMA_UDAQPC_SQHDRRINGBUFSIZE_S 32 +#define IRDMA_UDAQPC_SQHDRRINGBUFSIZE_M \ + ((u64)0x3 << IRDMA_UDAQPC_SQHDRRINGBUFSIZE_S) + +#define IRDMA_UDAQPC_PRIVILEGEENABLE_S 25 +#define IRDMA_UDAQPC_PRIVILEGEENABLE_M \ + BIT_ULL(IRDMA_UDAQPC_PRIVILEGEENABLE_S) + +#define IRDMA_UDAQPC_USE_STATISTICS_INSTANCE_S 26 +#define IRDMA_UDAQPC_USE_STATISTICS_INSTANCE_M \ + BIT_ULL(IRDMA_UDAQPC_USE_STATISTICS_INSTANCE_S) + +#define IRDMA_UDAQPC_STATISTICS_INSTANCE_INDEX_S 0 +#define IRDMA_UDAQPC_STATISTICS_INSTANCE_INDEX_M \ + ((u64)0x7F << IRDMA_UDAQPC_STATISTICS_INSTANCE_INDEX_S) + +#define IRDMA_UDAQPC_PRIVHDRGENENABLE_S 0 +#define IRDMA_UDAQPC_PRIVHDRGENENABLE_M \ + BIT_ULL(IRDMA_UDAQPC_PRIVHDRGENENABLE_S) + +#define IRDMA_UDAQPC_RQHDRSPLITENABLE_S 3 +#define IRDMA_UDAQPC_RQHDRSPLITENABLE_M \ + BIT_ULL(IRDMA_UDAQPC_RQHDRSPLITENABLE_S) + +#define IRDMA_UDAQPC_RQHDRRINGBUFENABLE_S 2 +#define IRDMA_UDAQPC_RQHDRRINGBUFENABLE_M \ + BIT_ULL(IRDMA_UDAQPC_RQHDRRINGBUFENABLE_S) + +#define IRDMA_UDAQPC_SQHDRRINGBUFENABLE_S 1 +#define IRDMA_UDAQPC_SQHDRRINGBUFENABLE_M \ + BIT_ULL(IRDMA_UDAQPC_SQHDRRINGBUFENABLE_S) + +#define IRDMA_UDAQPC_IPID_S 32 +#define IRDMA_UDAQPC_IPID_M ((u64)0xffff << IRDMA_UDAQPC_IPID_S) + +#define IRDMA_UDAQPC_SNDMSS_S 16 +#define IRDMA_UDAQPC_SNDMSS_M ((u64)0x3fff << IRDMA_UDAQPC_SNDMSS_S) + +#define IRDMA_UDAQPC_VLANTAG_S 0 +#define IRDMA_UDAQPC_VLANTAG_M ((u64)0xffff << IRDMA_UDAQPC_VLANTAG_S) + +/* Address Handle */ +#define IRDMA_UDA_CQPSQ_MAV_PDINDEXHI_S 20 +#define IRDMA_UDA_CQPSQ_MAV_PDINDEXHI_M \ + ((u64)0x3 << IRDMA_UDA_CQPSQ_MAV_PDINDEXHI_S) + +#define 
IRDMA_UDA_CQPSQ_MAV_PDINDEXLO_S 48 +#define IRDMA_UDA_CQPSQ_MAV_PDINDEXLO_M \ + ((u64)0xffff << IRDMA_UDA_CQPSQ_MAV_PDINDEXLO_S) + +#define IRDMA_UDA_CQPSQ_MAV_SRCMACADDRINDEX_S 24 +#define IRDMA_UDA_CQPSQ_MAV_SRCMACADDRINDEX_M \ + ((u64)0x3f << IRDMA_UDA_CQPSQ_MAV_SRCMACADDRINDEX_S) + +#define IRDMA_UDA_CQPSQ_MAV_ARPINDEX_S 48 +#define IRDMA_UDA_CQPSQ_MAV_ARPINDEX_M \ + ((u64)0xffff << IRDMA_UDA_CQPSQ_MAV_ARPINDEX_S) + +#define IRDMA_UDA_CQPSQ_MAV_TC_S 32 +#define IRDMA_UDA_CQPSQ_MAV_TC_M ((u64)0xff << IRDMA_UDA_CQPSQ_MAV_TC_S) + +#define IRDMA_UDA_CQPSQ_MAV_HOPLIMIT_S 32 +#define IRDMA_UDA_CQPSQ_MAV_HOPLIMIT_M \ + ((u64)0xff << IRDMA_UDA_CQPSQ_MAV_HOPLIMIT_S) + +#define IRDMA_UDA_CQPSQ_MAV_FLOWLABEL_S 0 +#define IRDMA_UDA_CQPSQ_MAV_FLOWLABEL_M \ + ((u64)0xfffff << IRDMA_UDA_CQPSQ_MAV_FLOWLABEL_S) + +#define IRDMA_UDA_CQPSQ_MAV_ADDR0_S 32 +#define IRDMA_UDA_CQPSQ_MAV_ADDR0_M \ + ((u64)0xffffffff << IRDMA_UDA_CQPSQ_MAV_ADDR0_S) + +#define IRDMA_UDA_CQPSQ_MAV_ADDR1_S 0 +#define IRDMA_UDA_CQPSQ_MAV_ADDR1_M \ + ((u64)0xffffffff << IRDMA_UDA_CQPSQ_MAV_ADDR1_S) + +#define IRDMA_UDA_CQPSQ_MAV_ADDR2_S 32 +#define IRDMA_UDA_CQPSQ_MAV_ADDR2_M \ + ((u64)0xffffffff << IRDMA_UDA_CQPSQ_MAV_ADDR2_S) + +#define IRDMA_UDA_CQPSQ_MAV_ADDR3_S 0 +#define IRDMA_UDA_CQPSQ_MAV_ADDR3_M \ + ((u64)0xffffffff << IRDMA_UDA_CQPSQ_MAV_ADDR3_S) + +#define IRDMA_UDA_CQPSQ_MAV_WQEVALID_S 63 +#define IRDMA_UDA_CQPSQ_MAV_WQEVALID_M \ + BIT_ULL(IRDMA_UDA_CQPSQ_MAV_WQEVALID_S) + +#define IRDMA_UDA_CQPSQ_MAV_OPCODE_S 32 +#define IRDMA_UDA_CQPSQ_MAV_OPCODE_M \ + ((u64)0x3f << IRDMA_UDA_CQPSQ_MAV_OPCODE_S) + +#define IRDMA_UDA_CQPSQ_MAV_DOLOOPBACKK_S 62 +#define IRDMA_UDA_CQPSQ_MAV_DOLOOPBACKK_M \ + BIT_ULL(IRDMA_UDA_CQPSQ_MAV_DOLOOPBACKK_S) + +#define IRDMA_UDA_CQPSQ_MAV_IPV4VALID_S 59 +#define IRDMA_UDA_CQPSQ_MAV_IPV4VALID_M \ + BIT_ULL(IRDMA_UDA_CQPSQ_MAV_IPV4VALID_S) + +#define IRDMA_UDA_CQPSQ_MAV_AVIDX_S 0 +#define IRDMA_UDA_CQPSQ_MAV_AVIDX_M \ + ((u64)0x1ffff << IRDMA_UDA_CQPSQ_MAV_AVIDX_S) + +#define IRDMA_UDA_CQPSQ_MAV_INSERTVLANTAG_S 60 +#define IRDMA_UDA_CQPSQ_MAV_INSERTVLANTAG_M BIT_ULL(IRDMA_UDA_CQPSQ_MAV_INSERTVLANTAG_S) + +/* UDA multicast group */ + +#define IRDMA_UDA_MGCTX_VFFLAG_S 29 +#define IRDMA_UDA_MGCTX_VFFLAG_M BIT_ULL(IRDMA_UDA_MGCTX_VFFLAG_S) + +#define IRDMA_UDA_MGCTX_DESTPORT_S 32 +#define IRDMA_UDA_MGCTX_DESTPORT_M ((u64)0xffff << IRDMA_UDA_MGCTX_DESTPORT_S) + +#define IRDMA_UDA_MGCTX_VFID_S 22 +#define IRDMA_UDA_MGCTX_VFID_M ((u64)0x7f << IRDMA_UDA_MGCTX_VFID_S) + +#define IRDMA_UDA_MGCTX_VALIDENT_S 31 +#define IRDMA_UDA_MGCTX_VALIDENT_M BIT_ULL(IRDMA_UDA_MGCTX_VALIDENT_S) + +#define IRDMA_UDA_MGCTX_PFID_S 18 +#define IRDMA_UDA_MGCTX_PFID_M ((u64)0xf << IRDMA_UDA_MGCTX_PFID_S) + +#define IRDMA_UDA_MGCTX_FLAGIGNOREDPORT_S 30 +#define IRDMA_UDA_MGCTX_FLAGIGNOREDPORT_M \ + BIT_ULL(IRDMA_UDA_MGCTX_FLAGIGNOREDPORT_S) + +#define IRDMA_UDA_MGCTX_QPID_S 0 +#define IRDMA_UDA_MGCTX_QPID_M ((u64)0x3ffff << IRDMA_UDA_MGCTX_QPID_S) + +/* multicast group create CQP command */ + +#define IRDMA_UDA_CQPSQ_MG_WQEVALID_S 63 +#define IRDMA_UDA_CQPSQ_MG_WQEVALID_M \ + BIT_ULL(IRDMA_UDA_CQPSQ_MG_WQEVALID_S) + +#define IRDMA_UDA_CQPSQ_MG_OPCODE_S 32 +#define IRDMA_UDA_CQPSQ_MG_OPCODE_M ((u64)0x3f << IRDMA_UDA_CQPSQ_MG_OPCODE_S) + +#define IRDMA_UDA_CQPSQ_MG_MGIDX_S 0 +#define IRDMA_UDA_CQPSQ_MG_MGIDX_M ((u64)0x1fff << IRDMA_UDA_CQPSQ_MG_MGIDX_S) + +#define IRDMA_UDA_CQPSQ_MG_IPV4VALID_S 60 +#define IRDMA_UDA_CQPSQ_MG_IPV4VALID_M BIT_ULL(IRDMA_UDA_CQPSQ_MG_IPV4VALID_S) + +#define IRDMA_UDA_CQPSQ_MG_VLANVALID_S 59 +#define 
IRDMA_UDA_CQPSQ_MG_VLANVALID_M BIT_ULL(IRDMA_UDA_CQPSQ_MG_VLANVALID_S) + +#define IRDMA_UDA_CQPSQ_MG_HMC_FCN_ID_S 0 +#define IRDMA_UDA_CQPSQ_MG_HMC_FCN_ID_M ((u64)0x3F << IRDMA_UDA_CQPSQ_MG_HMC_FCN_ID_S) + +#define IRDMA_UDA_CQPSQ_MG_VLANID_S 32 +#define IRDMA_UDA_CQPSQ_MG_VLANID_M ((u64)0xFFF << IRDMA_UDA_CQPSQ_MG_VLANID_S) + +#define IRDMA_UDA_CQPSQ_QS_HANDLE_S 0 +#define IRDMA_UDA_CQPSQ_QS_HANDLE_M ((u64)0x3FF << IRDMA_UDA_CQPSQ_QS_HANDLE_S) + +/* Quad hash table */ +#define IRDMA_UDA_CQPSQ_QHASH_QPN_S 32 +#define IRDMA_UDA_CQPSQ_QHASH_QPN_M \ + ((u64)0x3ffff << IRDMA_UDA_CQPSQ_QHASH_QPN_S) + +#define IRDMA_UDA_CQPSQ_QHASH__S 0 +#define IRDMA_UDA_CQPSQ_QHASH__M BIT_ULL(IRDMA_UDA_CQPSQ_QHASH__S) + +#define IRDMA_UDA_CQPSQ_QHASH_SRC_PORT_S 16 +#define IRDMA_UDA_CQPSQ_QHASH_SRC_PORT_M \ + ((u64)0xffff << IRDMA_UDA_CQPSQ_QHASH_SRC_PORT_S) + +#define IRDMA_UDA_CQPSQ_QHASH_DEST_PORT_S 0 +#define IRDMA_UDA_CQPSQ_QHASH_DEST_PORT_M \ + ((u64)0xffff << IRDMA_UDA_CQPSQ_QHASH_DEST_PORT_S) + +#define IRDMA_UDA_CQPSQ_QHASH_ADDR0_S 32 +#define IRDMA_UDA_CQPSQ_QHASH_ADDR0_M \ + ((u64)0xffffffff << IRDMA_UDA_CQPSQ_QHASH_ADDR0_S) + +#define IRDMA_UDA_CQPSQ_QHASH_ADDR1_S 0 +#define IRDMA_UDA_CQPSQ_QHASH_ADDR1_M \ + ((u64)0xffffffff << IRDMA_UDA_CQPSQ_QHASH_ADDR1_S) + +#define IRDMA_UDA_CQPSQ_QHASH_ADDR2_S 32 +#define IRDMA_UDA_CQPSQ_QHASH_ADDR2_M \ + ((u64)0xffffffff << IRDMA_UDA_CQPSQ_QHASH_ADDR2_S) + +#define IRDMA_UDA_CQPSQ_QHASH_ADDR3_S 0 +#define IRDMA_UDA_CQPSQ_QHASH_ADDR3_M \ + ((u64)0xffffffff << IRDMA_UDA_CQPSQ_QHASH_ADDR3_S) + +#define IRDMA_UDA_CQPSQ_QHASH_WQEVALID_S 63 +#define IRDMA_UDA_CQPSQ_QHASH_WQEVALID_M \ + BIT_ULL(IRDMA_UDA_CQPSQ_QHASH_WQEVALID_S) + +#define IRDMA_UDA_CQPSQ_QHASH_OPCODE_S 32 +#define IRDMA_UDA_CQPSQ_QHASH_OPCODE_M \ + ((u64)0x3f << IRDMA_UDA_CQPSQ_QHASH_OPCODE_S) + +#define IRDMA_UDA_CQPSQ_QHASH_MANAGE_S 61 +#define IRDMA_UDA_CQPSQ_QHASH_MANAGE_M \ + ((u64)0x3 << IRDMA_UDA_CQPSQ_QHASH_MANAGE_S) + +#define IRDMA_UDA_CQPSQ_QHASH_IPV4VALID_S 60 +#define IRDMA_UDA_CQPSQ_QHASH_IPV4VALID_M \ + ((u64)0x1 << IRDMA_UDA_CQPSQ_QHASH_IPV4VALID_S) + +#define IRDMA_UDA_CQPSQ_QHASH_LANFWD_S 59 +#define IRDMA_UDA_CQPSQ_QHASH_LANFWD_M \ + ((u64)0x1 << IRDMA_UDA_CQPSQ_QHASH_LANFWD_S) + +#define IRDMA_UDA_CQPSQ_QHASH_ENTRYTYPE_S 42 +#define IRDMA_UDA_CQPSQ_QHASH_ENTRYTYPE_M \ + ((u64)0x7 << IRDMA_UDA_CQPSQ_QHASH_ENTRYTYPE_S) +#endif /* IRDMA_UDA_D_H */ From patchwork Fri Apr 17 17:12:46 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeff Kirsher X-Patchwork-Id: 221085 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.7 required=3.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id F095FC38A2B for ; Fri, 17 Apr 2020 17:21:29 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id D7A992220A for ; Fri, 17 Apr 2020 17:21:29 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729689AbgDQRV3 (ORCPT ); Fri, 17 Apr 2020 13:21:29 -0400 Received: from mga01.intel.com ([192.55.52.88]:30739 "EHLO mga01.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with 
ESMTP id S1728088AbgDQRV2 (ORCPT ); Fri, 17 Apr 2020 13:21:28 -0400 IronPort-SDR: zxE7/LEvamOavczAwOVpFSfDKnUYmhAfVrf7Rnib/In7loT15oB+UeOpeShmlBlcCqXVh+esUb QkPxCUym2Ugg== X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 17 Apr 2020 10:12:57 -0700 IronPort-SDR: kI5SgG4r9/s6YtqY8n6Q4O2CH2+H2LbGdhGRCpXO4tiTcmaNr0r/EQ4uRS/IXqhQa9mirO4RWV TNrU2M+rTnMQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.72,395,1580803200"; d="scan'208";a="364383740" Received: from jtkirshe-desk1.jf.intel.com ([134.134.177.86]) by fmsmga001.fm.intel.com with ESMTP; 17 Apr 2020 10:12:57 -0700 From: Jeff Kirsher To: gregkh@linuxfoundation.org, jgg@ziepe.ca Cc: Mustafa Ismail , netdev@vger.kernel.org, linux-rdma@vger.kernel.org, nhorman@redhat.com, sassmann@redhat.com, Shiraz Saleem Subject: [RFC PATCH v5 11/16] RDMA/irdma: Add user/kernel shared libraries Date: Fri, 17 Apr 2020 10:12:46 -0700 Message-Id: <20200417171251.1533371-12-jeffrey.t.kirsher@intel.com> X-Mailer: git-send-email 2.25.2 In-Reply-To: <20200417171251.1533371-1-jeffrey.t.kirsher@intel.com> References: <20200417171251.1533371-1-jeffrey.t.kirsher@intel.com> MIME-Version: 1.0 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Mustafa Ismail Building the WQE descriptors for different verb operations are similar in kernel and user-space. Add these shared libraries. Signed-off-by: Mustafa Ismail Signed-off-by: Shiraz Saleem --- drivers/infiniband/hw/irdma/uk.c | 1744 ++++++++++++++++++++++++++++ drivers/infiniband/hw/irdma/user.h | 448 +++++++ 2 files changed, 2192 insertions(+) create mode 100644 drivers/infiniband/hw/irdma/uk.c create mode 100644 drivers/infiniband/hw/irdma/user.h diff --git a/drivers/infiniband/hw/irdma/uk.c b/drivers/infiniband/hw/irdma/uk.c new file mode 100644 index 000000000000..18c72be7e73b --- /dev/null +++ b/drivers/infiniband/hw/irdma/uk.c @@ -0,0 +1,1744 @@ +// SPDX-License-Identifier: GPL-2.0 or Linux-OpenIB +/* Copyright (c) 2015 - 2019 Intel Corporation */ +#include "osdep.h" +#include "status.h" +#include "defs.h" +#include "user.h" +#include "irdma.h" + +/** + * irdma_set_fragment - set fragment in wqe + * @wqe: wqe for setting fragment + * @offset: offset value + * @sge: sge length and stag + * @valid: The wqe valid + */ +static void irdma_set_fragment(__le64 *wqe, u32 offset, struct irdma_sge *sge, + u8 valid) +{ + if (sge) { + set_64bit_val(wqe, offset, + LS_64(sge->tag_off, IRDMAQPSQ_FRAG_TO)); + set_64bit_val(wqe, offset + 8, + LS_64(valid, IRDMAQPSQ_VALID) | + LS_64(sge->len, IRDMAQPSQ_FRAG_LEN) | + LS_64(sge->stag, IRDMAQPSQ_FRAG_STAG)); + } else { + set_64bit_val(wqe, offset, 0); + set_64bit_val(wqe, offset + 8, + LS_64(valid, IRDMAQPSQ_VALID)); + } +} + +/** + * irdma_set_fragment_gen_1 - set fragment in wqe + * @wqe: wqe for setting fragment + * @offset: offset value + * @sge: sge length and stag + * @valid: wqe valid flag + */ +static void irdma_set_fragment_gen_1(__le64 *wqe, u32 offset, + struct irdma_sge *sge, u8 valid) +{ + if (sge) { + set_64bit_val(wqe, offset, + LS_64(sge->tag_off, IRDMAQPSQ_FRAG_TO)); + set_64bit_val(wqe, offset + 8, + LS_64(sge->len, IRDMAQPSQ_GEN1_FRAG_LEN) | + LS_64(sge->stag, IRDMAQPSQ_GEN1_FRAG_STAG)); + } else { + set_64bit_val(wqe, offset, 0); + set_64bit_val(wqe, offset + 8, 0); + } +} + +/** + * irdma_nop_1 - insert a NOP wqe + * @qp: hw qp ptr + */ +static enum 
irdma_status_code irdma_nop_1(struct irdma_qp_uk *qp) +{ + u64 hdr; + __le64 *wqe; + u32 wqe_idx; + bool signaled = false; + + if (!qp->sq_ring.head) + return IRDMA_ERR_PARAM; + + wqe_idx = IRDMA_RING_CURRENT_HEAD(qp->sq_ring); + wqe = qp->sq_base[wqe_idx].elem; + + qp->sq_wrtrk_array[wqe_idx].quanta = IRDMA_QP_WQE_MIN_QUANTA; + + set_64bit_val(wqe, 0, 0); + set_64bit_val(wqe, 8, 0); + set_64bit_val(wqe, 16, 0); + + hdr = LS_64(IRDMAQP_OP_NOP, IRDMAQPSQ_OPCODE) | + LS_64(signaled, IRDMAQPSQ_SIGCOMPL) | + LS_64(qp->swqe_polarity, IRDMAQPSQ_VALID); + + /* make sure WQE is written before valid bit is set */ + dma_wmb(); + + set_64bit_val(wqe, 24, hdr); + + return 0; +} + +/** + * irdma_clear_wqes - clear next 128 sq entries + * @qp: hw qp ptr + * @qp_wqe_idx: wqe_idx + */ +void irdma_clr_wqes(struct irdma_qp_uk *qp, u32 qp_wqe_idx) +{ + u64 wqe_addr; + u32 wqe_idx; + + if (!(qp_wqe_idx & 0x7F)) { + wqe_idx = (qp_wqe_idx + 128) % qp->sq_ring.size; + wqe_addr = (u64)qp->sq_base->elem + IRDMA_WQE_SIZE_32 * wqe_idx; + + if (wqe_idx) + memset((void *)wqe_addr, qp->swqe_polarity ? 0 : 0xFF, 0x1000); + else + memset((void *)wqe_addr, qp->swqe_polarity ? 0xFF : 0, 0x1000); + } +} + +/** + * irdma_qp_post_wr - ring doorbell + * @qp: hw qp ptr + */ +void irdma_qp_post_wr(struct irdma_qp_uk *qp) +{ + u64 temp; + u32 hw_sq_tail; + u32 sw_sq_head; + + /* valid bit is written and loads completed before reading shadow */ + mb(); + + /* read the doorbell shadow area */ + get_64bit_val(qp->shadow_area, 0, &temp); + + hw_sq_tail = (u32)RS_64(temp, IRDMA_QP_DBSA_HW_SQ_TAIL); + sw_sq_head = IRDMA_RING_CURRENT_HEAD(qp->sq_ring); + if (sw_sq_head != qp->initial_ring.head) { + if (qp->push_mode) { + writel(qp->qp_id, qp->wqe_alloc_db); + qp->push_mode = false; + } else if (sw_sq_head != hw_sq_tail) { + if (sw_sq_head > qp->initial_ring.head) { + if (hw_sq_tail >= qp->initial_ring.head && + hw_sq_tail < sw_sq_head) + writel(qp->qp_id, qp->wqe_alloc_db); + } else { + if (hw_sq_tail >= qp->initial_ring.head || + hw_sq_tail < sw_sq_head) + writel(qp->qp_id, qp->wqe_alloc_db); + } + } + } + + qp->initial_ring.head = qp->sq_ring.head; +} + +/** + * irdma_qp_ring_push_db - ring qp doorbell + * @qp: hw qp ptr + * @wqe_idx: wqe index + */ +static void irdma_qp_ring_push_db(struct irdma_qp_uk *qp, u32 wqe_idx) +{ + set_32bit_val(qp->push_db, 0, + LS_32(wqe_idx >> 3, IRDMA_WQEALLOC_WQE_DESC_INDEX) | qp->qp_id); + qp->initial_ring.head = qp->sq_ring.head; + qp->push_mode = true; +} + +void irdma_qp_push_wqe(struct irdma_qp_uk *qp, __le64 *wqe, u16 quanta, + u32 wqe_idx, bool post_sq) +{ + __le64 *push; + + if (IRDMA_RING_CURRENT_HEAD(qp->initial_ring) != + IRDMA_RING_CURRENT_TAIL(qp->sq_ring) && + !(qp->push_mode)) { + if (post_sq) + irdma_qp_post_wr(qp); + } else { + push = (__le64 *)((uintptr_t)qp->push_wqe + + (wqe_idx & 0x7) * 0x20); + memcpy(push, wqe, quanta * IRDMA_QP_WQE_MIN_SIZE); + irdma_qp_ring_push_db(qp, wqe_idx); + } +} + +/** + * irdma_qp_get_next_send_wqe - pad with NOP if needed, return where next WR should go + * @qp: hw qp ptr + * @wqe_idx: return wqe index + * @quanta: size of WR in quanta + * @total_size: size of WR in bytes + * @info: info on WR + */ +__le64 *irdma_qp_get_next_send_wqe(struct irdma_qp_uk *qp, u32 *wqe_idx, + u16 quanta, u32 total_size, + struct irdma_post_sq_info *info) +{ + __le64 *wqe; + __le64 *wqe_0 = NULL; + u16 nop_cnt; + u16 i; + + nop_cnt = IRDMA_RING_CURRENT_HEAD(qp->sq_ring) % + qp->uk_attrs->max_hw_sq_chunk; + if (nop_cnt) + nop_cnt = qp->uk_attrs->max_hw_sq_chunk - nop_cnt; + 
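+	/*
+	 * nop_cnt is now the number of quanta left before the next
+	 * max_hw_sq_chunk boundary. If the new WR does not fit in that
+	 * remainder, the leftover slots are filled with NOP WQEs below so
+	 * the WR starts at the next chunk boundary instead of straddling it.
+	 */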
+ if (quanta > nop_cnt) { + /* Need to pad with NOP */ + /* Make sure SQ has room for nop_cnt + quanta */ + if ((u32)(quanta + nop_cnt) > + IRDMA_SQ_RING_FREE_QUANTA(qp->sq_ring)) + return NULL; + + /* pad with NOP */ + for (i = 0; i < nop_cnt; i++) { + irdma_nop_1(qp); + IRDMA_RING_MOVE_HEAD_NOCHECK(qp->sq_ring); + } + info->push_wqe = false; + } else { + /* no need to pad with NOP */ + if (quanta > IRDMA_SQ_RING_FREE_QUANTA(qp->sq_ring)) + return NULL; + } + + *wqe_idx = IRDMA_RING_CURRENT_HEAD(qp->sq_ring); + if (!*wqe_idx) + qp->swqe_polarity = !qp->swqe_polarity; + + IRDMA_RING_MOVE_HEAD_BY_COUNT_NOCHECK(qp->sq_ring, quanta); + + wqe = qp->sq_base[*wqe_idx].elem; + if (qp->uk_attrs->hw_rev == IRDMA_GEN_1 && quanta == 1 && + (IRDMA_RING_CURRENT_HEAD(qp->sq_ring) & 1)) { + wqe_0 = qp->sq_base[IRDMA_RING_CURRENT_HEAD(qp->sq_ring)].elem; + wqe_0[3] = cpu_to_le64(LS_64(!qp->swqe_polarity, IRDMAQPSQ_VALID)); + } + qp->sq_wrtrk_array[*wqe_idx].wrid = info->wr_id; + qp->sq_wrtrk_array[*wqe_idx].wr_len = total_size; + qp->sq_wrtrk_array[*wqe_idx].quanta = quanta; + + return wqe; +} + +/** + * irdma_qp_get_next_recv_wqe - get next qp's rcv wqe + * @qp: hw qp ptr + * @wqe_idx: return wqe index + */ +__le64 *irdma_qp_get_next_recv_wqe(struct irdma_qp_uk *qp, u32 *wqe_idx) +{ + __le64 *wqe; + enum irdma_status_code ret_code; + + if (IRDMA_RING_FULL_ERR(qp->rq_ring)) + return NULL; + + IRDMA_ATOMIC_RING_MOVE_HEAD(qp->rq_ring, *wqe_idx, ret_code); + if (ret_code) + return NULL; + + if (!*wqe_idx) + qp->rwqe_polarity = !qp->rwqe_polarity; + /* rq_wqe_size_multiplier is no of 32 byte quanta in in one rq wqe */ + wqe = qp->rq_base[*wqe_idx * qp->rq_wqe_size_multiplier].elem; + + return wqe; +} + +/** + * irdma_rdma_write - rdma write operation + * @qp: hw qp ptr + * @info: post sq information + * @post_sq: flag to post sq + */ +static enum irdma_status_code irdma_rdma_write(struct irdma_qp_uk *qp, + struct irdma_post_sq_info *info, + bool post_sq) +{ + u64 hdr; + __le64 *wqe; + struct irdma_rdma_write *op_info; + u32 i, wqe_idx; + u32 total_size = 0, byte_off; + enum irdma_status_code ret_code; + u32 frag_cnt, addl_frag_cnt; + bool read_fence = false; + u16 quanta; + + info->push_wqe = qp->push_db ? true : false; + + op_info = &info->op.rdma_write; + if (op_info->num_lo_sges > qp->max_sq_frag_cnt) + return IRDMA_ERR_INVALID_FRAG_COUNT; + + for (i = 0; i < op_info->num_lo_sges; i++) + total_size += op_info->lo_sg_list[i].len; + + read_fence |= info->read_fence; + + if (info->imm_data_valid) + frag_cnt = op_info->num_lo_sges + 1; + else + frag_cnt = op_info->num_lo_sges; + addl_frag_cnt = frag_cnt > 1 ? 
(frag_cnt - 1) : 0; + ret_code = irdma_fragcnt_to_quanta_sq(frag_cnt, &quanta); + if (ret_code) + return ret_code; + + wqe = irdma_qp_get_next_send_wqe(qp, &wqe_idx, quanta, total_size, + info); + if (!wqe) + return IRDMA_ERR_QP_TOOMANY_WRS_POSTED; + + irdma_clr_wqes(qp, wqe_idx); + + set_64bit_val(wqe, 16, + LS_64(op_info->rem_addr.tag_off, IRDMAQPSQ_FRAG_TO)); + + if (info->imm_data_valid) { + set_64bit_val(wqe, 0, + LS_64(info->imm_data, IRDMAQPSQ_IMMDATA)); + i = 0; + } else { + qp->wqe_ops.iw_set_fragment(wqe, 0, + op_info->lo_sg_list, + qp->swqe_polarity); + i = 1; + } + + for (byte_off = 32; i < op_info->num_lo_sges; i++) { + qp->wqe_ops.iw_set_fragment(wqe, byte_off, + &op_info->lo_sg_list[i], + qp->swqe_polarity); + byte_off += 16; + } + + /* if not an odd number set valid bit in next fragment */ + if (qp->uk_attrs->hw_rev > IRDMA_GEN_1 && !(frag_cnt & 0x01) && + frag_cnt) { + qp->wqe_ops.iw_set_fragment(wqe, byte_off, NULL, + qp->swqe_polarity); + if (qp->uk_attrs->hw_rev == IRDMA_GEN_2) + ++addl_frag_cnt; + } + + hdr = LS_64(op_info->rem_addr.stag, IRDMAQPSQ_REMSTAG) | + LS_64(info->op_type, IRDMAQPSQ_OPCODE) | + LS_64((info->imm_data_valid ? 1 : 0), IRDMAQPSQ_IMMDATAFLAG) | + LS_64((info->report_rtt ? 1 : 0), IRDMAQPSQ_REPORTRTT) | + LS_64(addl_frag_cnt, IRDMAQPSQ_ADDFRAGCNT) | + LS_64((info->push_wqe ? 1 : 0), IRDMAQPSQ_PUSHWQE) | + LS_64(read_fence, IRDMAQPSQ_READFENCE) | + LS_64(info->local_fence, IRDMAQPSQ_LOCALFENCE) | + LS_64(info->signaled, IRDMAQPSQ_SIGCOMPL) | + LS_64(qp->swqe_polarity, IRDMAQPSQ_VALID); + + dma_wmb(); /* make sure WQE is populated before valid bit is set */ + + set_64bit_val(wqe, 24, hdr); + if (info->push_wqe) { + irdma_qp_push_wqe(qp, wqe, quanta, wqe_idx, post_sq); + } else { + if (post_sq) + irdma_qp_post_wr(qp); + } + + return 0; +} + +/** + * irdma_rdma_read - rdma read command + * @qp: hw qp ptr + * @info: post sq information + * @inv_stag: flag for inv_stag + * @post_sq: flag to post sq + */ +static enum irdma_status_code irdma_rdma_read(struct irdma_qp_uk *qp, + struct irdma_post_sq_info *info, + bool inv_stag, bool post_sq) +{ + struct irdma_rdma_read *op_info; + enum irdma_status_code ret_code; + u32 i, byte_off, total_size = 0; + bool local_fence = false; + u32 addl_frag_cnt; + __le64 *wqe; + u32 wqe_idx; + u16 quanta; + u64 hdr; + + info->push_wqe = qp->push_db ? true : false; + + op_info = &info->op.rdma_read; + if (qp->max_sq_frag_cnt < op_info->num_lo_sges) + return IRDMA_ERR_INVALID_FRAG_COUNT; + + for (i = 0; i < op_info->num_lo_sges; i++) + total_size += op_info->lo_sg_list[i].len; + + ret_code = irdma_fragcnt_to_quanta_sq(op_info->num_lo_sges, &quanta); + if (ret_code) + return ret_code; + + wqe = irdma_qp_get_next_send_wqe(qp, &wqe_idx, quanta, total_size, + info); + if (!wqe) + return IRDMA_ERR_QP_TOOMANY_WRS_POSTED; + + irdma_clr_wqes(qp, wqe_idx); + + addl_frag_cnt = op_info->num_lo_sges > 1 ? 
+ (op_info->num_lo_sges - 1) : 0; + local_fence |= info->local_fence; + + qp->wqe_ops.iw_set_fragment(wqe, 0, op_info->lo_sg_list, + qp->swqe_polarity); + for (i = 1, byte_off = 32; i < op_info->num_lo_sges; ++i) { + qp->wqe_ops.iw_set_fragment(wqe, byte_off, + &op_info->lo_sg_list[i], + qp->swqe_polarity); + byte_off += 16; + } + + /* if not an odd number set valid bit in next fragment */ + if (qp->uk_attrs->hw_rev > IRDMA_GEN_1 && + !(op_info->num_lo_sges & 0x01) && op_info->num_lo_sges) { + qp->wqe_ops.iw_set_fragment(wqe, byte_off, NULL, + qp->swqe_polarity); + if (qp->uk_attrs->hw_rev == IRDMA_GEN_2) + ++addl_frag_cnt; + } + set_64bit_val(wqe, 16, + LS_64(op_info->rem_addr.tag_off, IRDMAQPSQ_FRAG_TO)); + hdr = LS_64(op_info->rem_addr.stag, IRDMAQPSQ_REMSTAG) | + LS_64((info->report_rtt ? 1 : 0), IRDMAQPSQ_REPORTRTT) | + LS_64(addl_frag_cnt, IRDMAQPSQ_ADDFRAGCNT) | + LS_64((inv_stag ? IRDMAQP_OP_RDMA_READ_LOC_INV : IRDMAQP_OP_RDMA_READ), + IRDMAQPSQ_OPCODE) | + LS_64((info->push_wqe ? 1 : 0), IRDMAQPSQ_PUSHWQE) | + LS_64(info->read_fence, IRDMAQPSQ_READFENCE) | + LS_64(local_fence, IRDMAQPSQ_LOCALFENCE) | + LS_64(info->signaled, IRDMAQPSQ_SIGCOMPL) | + LS_64(qp->swqe_polarity, IRDMAQPSQ_VALID); + + dma_wmb(); /* make sure WQE is populated before valid bit is set */ + + set_64bit_val(wqe, 24, hdr); + if (info->push_wqe) { + irdma_qp_push_wqe(qp, wqe, quanta, wqe_idx, post_sq); + } else { + if (post_sq) + irdma_qp_post_wr(qp); + } + + return 0; +} + +/** + * irdma_send - rdma send command + * @qp: hw qp ptr + * @info: post sq information + * @post_sq: flag to post sq + */ +static enum irdma_status_code irdma_send(struct irdma_qp_uk *qp, + struct irdma_post_sq_info *info, + bool post_sq) +{ + __le64 *wqe; + struct irdma_post_send *op_info; + u64 hdr; + u32 i, wqe_idx, total_size = 0, byte_off; + enum irdma_status_code ret_code; + u32 frag_cnt, addl_frag_cnt; + bool read_fence = false; + u16 quanta; + + info->push_wqe = qp->push_db ? true : false; + + op_info = &info->op.send; + if (qp->max_sq_frag_cnt < op_info->num_sges) + return IRDMA_ERR_INVALID_FRAG_COUNT; + + for (i = 0; i < op_info->num_sges; i++) + total_size += op_info->sg_list[i].len; + + if (info->imm_data_valid) + frag_cnt = op_info->num_sges + 1; + else + frag_cnt = op_info->num_sges; + ret_code = irdma_fragcnt_to_quanta_sq(frag_cnt, &quanta); + if (ret_code) + return ret_code; + + wqe = irdma_qp_get_next_send_wqe(qp, &wqe_idx, quanta, total_size, + info); + if (!wqe) + return IRDMA_ERR_QP_TOOMANY_WRS_POSTED; + + irdma_clr_wqes(qp, wqe_idx); + + read_fence |= info->read_fence; + addl_frag_cnt = frag_cnt > 1 ? (frag_cnt - 1) : 0; + if (info->imm_data_valid) { + set_64bit_val(wqe, 0, + LS_64(info->imm_data, IRDMAQPSQ_IMMDATA)); + i = 0; + } else { + qp->wqe_ops.iw_set_fragment(wqe, 0, op_info->sg_list, + qp->swqe_polarity); + i = 1; + } + + for (byte_off = 32; i < op_info->num_sges; i++) { + qp->wqe_ops.iw_set_fragment(wqe, byte_off, &op_info->sg_list[i], + qp->swqe_polarity); + byte_off += 16; + } + + /* if not an odd number set valid bit in next fragment */ + if (qp->uk_attrs->hw_rev > IRDMA_GEN_1 && !(frag_cnt & 0x01) && + frag_cnt) { + qp->wqe_ops.iw_set_fragment(wqe, byte_off, NULL, + qp->swqe_polarity); + if (qp->uk_attrs->hw_rev == IRDMA_GEN_2) + ++addl_frag_cnt; + } + + set_64bit_val(wqe, 16, + LS_64(op_info->qkey, IRDMAQPSQ_DESTQKEY) | + LS_64(op_info->dest_qp, IRDMAQPSQ_DESTQPN)); + hdr = LS_64(info->stag_to_inv, IRDMAQPSQ_REMSTAG) | + LS_64(op_info->ah_id, IRDMAQPSQ_AHID) | + LS_64((info->imm_data_valid ? 
1 : 0), IRDMAQPSQ_IMMDATAFLAG) | + LS_64((info->report_rtt ? 1 : 0), IRDMAQPSQ_REPORTRTT) | + LS_64(info->op_type, IRDMAQPSQ_OPCODE) | + LS_64(addl_frag_cnt, IRDMAQPSQ_ADDFRAGCNT) | + LS_64((info->push_wqe ? 1 : 0), IRDMAQPSQ_PUSHWQE) | + LS_64(read_fence, IRDMAQPSQ_READFENCE) | + LS_64(info->local_fence, IRDMAQPSQ_LOCALFENCE) | + LS_64(info->signaled, IRDMAQPSQ_SIGCOMPL) | + LS_64(info->udp_hdr, IRDMAQPSQ_UDPHEADER) | + LS_64(info->l4len, IRDMAQPSQ_L4LEN) | + LS_64(qp->swqe_polarity, IRDMAQPSQ_VALID); + + dma_wmb(); /* make sure WQE is populated before valid bit is set */ + + set_64bit_val(wqe, 24, hdr); + if (info->push_wqe) { + irdma_qp_push_wqe(qp, wqe, quanta, wqe_idx, post_sq); + } else { + if (post_sq) + irdma_qp_post_wr(qp); + } + + return 0; +} + +/** + * irdma_set_mw_bind_wqe_gen_1 - set mw bind wqe + * @wqe: wqe for setting fragment + * @op_info: info for setting bind wqe values + */ +static void irdma_set_mw_bind_wqe_gen_1(__le64 *wqe, + struct irdma_bind_window *op_info) +{ + set_64bit_val(wqe, 0, (uintptr_t)op_info->va); + set_64bit_val(wqe, 8, + LS_64(op_info->mw_stag, IRDMAQPSQ_PARENTMRSTAG) | + LS_64(op_info->mr_stag, IRDMAQPSQ_MWSTAG)); + set_64bit_val(wqe, 16, op_info->bind_len); +} + +/** + * irdma_copy_inline_data_gen_1 - Copy inline data to wqe + * @dest: pointer to wqe + * @src: pointer to inline data + * @len: length of inline data to copy + * @polarity: compatibility parameter + */ +static void irdma_copy_inline_data_gen_1(u8 *dest, u8 *src, u32 len, + u8 polarity) +{ + if (len <= 16) { + memcpy(dest, src, len); + } else { + memcpy(dest, src, 16); + src += 16; + dest = dest + 32; + memcpy(dest, src, len - 16); + } +} + +/** + * irdma_inline_data_size_to_quanta_gen_1 - based on inline data, quanta + * @data_size: data size for inline + * @quanta: size of sq wqe returned + * @max_size: maximum allowed inline size + * + * Gets the quanta based on inline and immediate data. + */ +static enum irdma_status_code +irdma_inline_data_size_to_quanta_gen_1(u32 data_size, u16 *quanta, u32 max_size) +{ + if (data_size > max_size) + return IRDMA_ERR_INVALID_INLINE_DATA_SIZE; + + if (data_size <= 16) + *quanta = IRDMA_QP_WQE_MIN_QUANTA; + else + *quanta = 2; + + return 0; +} + +/** + * irdma_set_mw_bind_wqe - set mw bind in wqe + * @wqe: wqe for setting mw bind + * @op_info: info for setting wqe values + */ +static void irdma_set_mw_bind_wqe(__le64 *wqe, + struct irdma_bind_window *op_info) +{ + set_64bit_val(wqe, 0, (uintptr_t)op_info->va); + set_64bit_val(wqe, 8, + LS_64(op_info->mr_stag, IRDMAQPSQ_PARENTMRSTAG) | + LS_64(op_info->mw_stag, IRDMAQPSQ_MWSTAG)); + set_64bit_val(wqe, 16, op_info->bind_len); +} + +/** + * irdma_copy_inline_data - Copy inline data to wqe + * @dest: pointer to wqe + * @src: pointer to inline data + * @len: length of inline data to copy + * @polarity: polarity of wqe valid bit + */ +static void irdma_copy_inline_data(u8 *dest, u8 *src, u32 len, u8 polarity) +{ + u8 inline_valid = polarity << IRDMA_INLINE_VALID_S; + u32 copy_size; + + dest += 8; + if (len <= 8) { + memcpy(dest, src, len); + return; + } + + *((u64 *)dest) = *((u64 *)src); + len -= 8; + src += 8; + dest += 24; /* point to additional 32 byte quanta */ + + while (len) { + copy_size = len < 31 ? 
len : 31; + memcpy(dest, src, copy_size); + *(dest + 31) = inline_valid; + len -= copy_size; + dest += 32; + src += copy_size; + } +} + +/** + * irdma_inline_data_size_to_quanta - based on inline data, quanta + * @data_size: data size for inline + * @quanta: size of sq wqe returned + * @max_size: maximum allowed inline size + * + * Gets the quanta based on inline and immediate data. + */ +static enum irdma_status_code +irdma_inline_data_size_to_quanta(u32 data_size, u16 *quanta, u32 max_size) +{ + if (data_size > max_size) + return IRDMA_ERR_INVALID_INLINE_DATA_SIZE; + + if (data_size <= 8) + *quanta = IRDMA_QP_WQE_MIN_QUANTA; + else if (data_size <= 39) + *quanta = 2; + else if (data_size <= 70) + *quanta = 3; + else if (data_size <= 101) + *quanta = 4; + else if (data_size <= 132) + *quanta = 5; + else if (data_size <= 163) + *quanta = 6; + else if (data_size <= 194) + *quanta = 7; + else + *quanta = 8; + + return 0; +} + +/** + * irdma_inline_rdma_write - inline rdma write operation + * @qp: hw qp ptr + * @info: post sq information + * @post_sq: flag to post sq + */ +static enum irdma_status_code +irdma_inline_rdma_write(struct irdma_qp_uk *qp, struct irdma_post_sq_info *info, + bool post_sq) +{ + __le64 *wqe; + struct irdma_inline_rdma_write *op_info; + u64 hdr = 0; + u32 wqe_idx; + enum irdma_status_code ret_code; + bool read_fence = false; + u16 quanta; + + info->push_wqe = qp->push_db ? true : false; + op_info = &info->op.inline_rdma_write; + ret_code = qp->wqe_ops.iw_inline_data_size_to_quanta(op_info->len, &quanta, + qp->uk_attrs->max_hw_inline); + if (ret_code) + return ret_code; + + wqe = irdma_qp_get_next_send_wqe(qp, &wqe_idx, quanta, op_info->len, + info); + if (!wqe) + return IRDMA_ERR_QP_TOOMANY_WRS_POSTED; + + irdma_clr_wqes(qp, wqe_idx); + + read_fence |= info->read_fence; + set_64bit_val(wqe, 16, + LS_64(op_info->rem_addr.tag_off, IRDMAQPSQ_FRAG_TO)); + + hdr = LS_64(op_info->rem_addr.stag, IRDMAQPSQ_REMSTAG) | + LS_64(info->op_type, IRDMAQPSQ_OPCODE) | + LS_64(op_info->len, IRDMAQPSQ_INLINEDATALEN) | + LS_64(info->report_rtt ? 1 : 0, IRDMAQPSQ_REPORTRTT) | + LS_64(1, IRDMAQPSQ_INLINEDATAFLAG) | + LS_64(info->imm_data_valid ? 1 : 0, IRDMAQPSQ_IMMDATAFLAG) | + LS_64(info->push_wqe ? 1 : 0, IRDMAQPSQ_PUSHWQE) | + LS_64(read_fence, IRDMAQPSQ_READFENCE) | + LS_64(info->local_fence, IRDMAQPSQ_LOCALFENCE) | + LS_64(info->signaled, IRDMAQPSQ_SIGCOMPL) | + LS_64(qp->swqe_polarity, IRDMAQPSQ_VALID); + + if (info->imm_data_valid) + set_64bit_val(wqe, 0, + LS_64(info->imm_data, IRDMAQPSQ_IMMDATA)); + + qp->wqe_ops.iw_copy_inline_data((u8 *)wqe, op_info->data, op_info->len, + qp->swqe_polarity); + dma_wmb(); /* make sure WQE is populated before valid bit is set */ + + set_64bit_val(wqe, 24, hdr); + + if (info->push_wqe) { + irdma_qp_push_wqe(qp, wqe, quanta, wqe_idx, post_sq); + } else { + if (post_sq) + irdma_qp_post_wr(qp); + } + + return 0; +} + +/** + * irdma_inline_send - inline send operation + * @qp: hw qp ptr + * @info: post sq information + * @post_sq: flag to post sq + */ +static enum irdma_status_code irdma_inline_send(struct irdma_qp_uk *qp, + struct irdma_post_sq_info *info, + bool post_sq) +{ + __le64 *wqe; + struct irdma_post_inline_send *op_info; + u64 hdr; + u32 wqe_idx; + enum irdma_status_code ret_code; + bool read_fence = false; + u16 quanta; + + info->push_wqe = qp->push_db ? 
true : false; + op_info = &info->op.inline_send; + + ret_code = qp->wqe_ops.iw_inline_data_size_to_quanta(op_info->len, + &quanta, + qp->uk_attrs->max_hw_inline); + if (ret_code) + return ret_code; + + wqe = irdma_qp_get_next_send_wqe(qp, &wqe_idx, quanta, op_info->len, + info); + if (!wqe) + return IRDMA_ERR_QP_TOOMANY_WRS_POSTED; + + irdma_clr_wqes(qp, wqe_idx); + + set_64bit_val(wqe, 16, + LS_64(op_info->qkey, IRDMAQPSQ_DESTQKEY) | + LS_64(op_info->dest_qp, IRDMAQPSQ_DESTQPN)); + + read_fence |= info->read_fence; + hdr = LS_64(info->stag_to_inv, IRDMAQPSQ_REMSTAG) | + LS_64(op_info->ah_id, IRDMAQPSQ_AHID) | + LS_64(info->op_type, IRDMAQPSQ_OPCODE) | + LS_64(op_info->len, IRDMAQPSQ_INLINEDATALEN) | + LS_64((info->imm_data_valid ? 1 : 0), IRDMAQPSQ_IMMDATAFLAG) | + LS_64((info->report_rtt ? 1 : 0), IRDMAQPSQ_REPORTRTT) | + LS_64(1, IRDMAQPSQ_INLINEDATAFLAG) | + LS_64((info->push_wqe ? 1 : 0), IRDMAQPSQ_PUSHWQE) | + LS_64(read_fence, IRDMAQPSQ_READFENCE) | + LS_64(info->local_fence, IRDMAQPSQ_LOCALFENCE) | + LS_64(info->signaled, IRDMAQPSQ_SIGCOMPL) | + LS_64(info->udp_hdr, IRDMAQPSQ_UDPHEADER) | + LS_64(info->l4len, IRDMAQPSQ_L4LEN) | + LS_64(qp->swqe_polarity, IRDMAQPSQ_VALID); + + if (info->imm_data_valid) + set_64bit_val(wqe, 0, + LS_64(info->imm_data, IRDMAQPSQ_IMMDATA)); + qp->wqe_ops.iw_copy_inline_data((u8 *)wqe, op_info->data, op_info->len, + qp->swqe_polarity); + + dma_wmb(); /* make sure WQE is populated before valid bit is set */ + + set_64bit_val(wqe, 24, hdr); + + if (info->push_wqe) { + irdma_qp_push_wqe(qp, wqe, quanta, wqe_idx, post_sq); + } else { + if (post_sq) + irdma_qp_post_wr(qp); + } + + return 0; +} + +/** + * irdma_stag_local_invalidate - stag invalidate operation + * @qp: hw qp ptr + * @info: post sq information + * @post_sq: flag to post sq + */ +static enum irdma_status_code +irdma_stag_local_invalidate(struct irdma_qp_uk *qp, + struct irdma_post_sq_info *info, bool post_sq) +{ + __le64 *wqe; + struct irdma_inv_local_stag *op_info; + u64 hdr; + u32 wqe_idx; + bool local_fence = false; + struct irdma_sge sge = {}; + + info->push_wqe = qp->push_db ? true : false; + op_info = &info->op.inv_local_stag; + local_fence = info->local_fence; + + wqe = irdma_qp_get_next_send_wqe(qp, &wqe_idx, IRDMA_QP_WQE_MIN_QUANTA, + 0, info); + if (!wqe) + return IRDMA_ERR_QP_TOOMANY_WRS_POSTED; + + irdma_clr_wqes(qp, wqe_idx); + + sge.stag = op_info->target_stag; + qp->wqe_ops.iw_set_fragment(wqe, 0, &sge, 0); + + set_64bit_val(wqe, 16, 0); + + hdr = LS_64(IRDMA_OP_TYPE_INV_STAG, IRDMAQPSQ_OPCODE) | + LS_64((info->push_wqe ? 1 : 0), IRDMAQPSQ_PUSHWQE) | + LS_64(info->read_fence, IRDMAQPSQ_READFENCE) | + LS_64(local_fence, IRDMAQPSQ_LOCALFENCE) | + LS_64(info->signaled, IRDMAQPSQ_SIGCOMPL) | + LS_64(qp->swqe_polarity, IRDMAQPSQ_VALID); + + dma_wmb(); /* make sure WQE is populated before valid bit is set */ + + set_64bit_val(wqe, 24, hdr); + + if (info->push_wqe) { + irdma_qp_push_wqe(qp, wqe, IRDMA_QP_WQE_MIN_QUANTA, wqe_idx, + post_sq); + } else { + if (post_sq) + irdma_qp_post_wr(qp); + } + + return 0; +} + +/** + * irdma_mw_bind - bind Memory Window + * @qp: hw qp ptr + * @info: post sq information + * @post_sq: flag to post sq + */ +static enum irdma_status_code irdma_mw_bind(struct irdma_qp_uk *qp, + struct irdma_post_sq_info *info, + bool post_sq) +{ + __le64 *wqe; + struct irdma_bind_window *op_info; + u64 hdr; + u32 wqe_idx; + bool local_fence = false; + + info->push_wqe = qp->push_db ? 
true : false; + op_info = &info->op.bind_window; + local_fence |= info->local_fence; + + wqe = irdma_qp_get_next_send_wqe(qp, &wqe_idx, IRDMA_QP_WQE_MIN_QUANTA, + 0, info); + if (!wqe) + return IRDMA_ERR_QP_TOOMANY_WRS_POSTED; + + irdma_clr_wqes(qp, wqe_idx); + + qp->wqe_ops.iw_set_mw_bind_wqe(wqe, op_info); + + hdr = LS_64(IRDMA_OP_TYPE_BIND_MW, IRDMAQPSQ_OPCODE) | + LS_64(((op_info->ena_reads << 2) | (op_info->ena_writes << 3)), + IRDMAQPSQ_STAGRIGHTS) | + LS_64((op_info->addressing_type == IRDMA_ADDR_TYPE_VA_BASED ? 1 : 0), + IRDMAQPSQ_VABASEDTO) | + LS_64((op_info->mem_window_type_1 ? 1 : 0), + IRDMAQPSQ_MEMWINDOWTYPE) | + LS_64((info->push_wqe ? 1 : 0), IRDMAQPSQ_PUSHWQE) | + LS_64(info->read_fence, IRDMAQPSQ_READFENCE) | + LS_64(local_fence, IRDMAQPSQ_LOCALFENCE) | + LS_64(info->signaled, IRDMAQPSQ_SIGCOMPL) | + LS_64(qp->swqe_polarity, IRDMAQPSQ_VALID); + + dma_wmb(); /* make sure WQE is populated before valid bit is set */ + + set_64bit_val(wqe, 24, hdr); + + if (info->push_wqe) { + irdma_qp_push_wqe(qp, wqe, IRDMA_QP_WQE_MIN_QUANTA, wqe_idx, + post_sq); + } else { + if (post_sq) + irdma_qp_post_wr(qp); + } + + return 0; +} + +/** + * irdma_post_receive - post receive wqe + * @qp: hw qp ptr + * @info: post rq information + */ +static enum irdma_status_code +irdma_post_receive(struct irdma_qp_uk *qp, struct irdma_post_rq_info *info) +{ + u32 total_size = 0, wqe_idx, i, byte_off; + u32 addl_frag_cnt; + __le64 *wqe; + u64 hdr; + + if (qp->max_rq_frag_cnt < info->num_sges) + return IRDMA_ERR_INVALID_FRAG_COUNT; + + for (i = 0; i < info->num_sges; i++) + total_size += info->sg_list[i].len; + + wqe = irdma_qp_get_next_recv_wqe(qp, &wqe_idx); + if (!wqe) + return IRDMA_ERR_QP_TOOMANY_WRS_POSTED; + + qp->rq_wrid_array[wqe_idx] = info->wr_id; + addl_frag_cnt = info->num_sges > 1 ? 
(info->num_sges - 1) : 0; + qp->wqe_ops.iw_set_fragment(wqe, 0, info->sg_list, + qp->rwqe_polarity); + + for (i = 1, byte_off = 32; i < info->num_sges; i++) { + qp->wqe_ops.iw_set_fragment(wqe, byte_off, &info->sg_list[i], + qp->rwqe_polarity); + byte_off += 16; + } + + /* if not an odd number set valid bit in next fragment */ + if (qp->uk_attrs->hw_rev > IRDMA_GEN_1 && !(info->num_sges & 0x01) && + info->num_sges) { + qp->wqe_ops.iw_set_fragment(wqe, byte_off, NULL, + qp->rwqe_polarity); + if (qp->uk_attrs->hw_rev == IRDMA_GEN_2) + ++addl_frag_cnt; + } + + set_64bit_val(wqe, 16, 0); + hdr = LS_64(addl_frag_cnt, IRDMAQPSQ_ADDFRAGCNT) | + LS_64(qp->rwqe_polarity, IRDMAQPSQ_VALID); + + dma_wmb(); /* make sure WQE is populated before valid bit is set */ + + set_64bit_val(wqe, 24, hdr); + + return 0; +} + +/** + * irdma_cq_resize - reset the cq buffer info + * @cq: cq to resize + * @cq_base: new cq buffer addr + * @cq_size: number of cqes + */ +static void irdma_cq_resize(struct irdma_cq_uk *cq, void *cq_base, int cq_size) +{ + cq->cq_base = cq_base; + cq->cq_size = cq_size; + IRDMA_RING_INIT(cq->cq_ring, cq->cq_size); + cq->polarity = 1; +} + +/** + * irdma_cq_set_resized_cnt - record the count of the resized buffers + * @cq: cq to resize + * @cq_cnt: the count of the resized cq buffers + */ +static void irdma_cq_set_resized_cnt(struct irdma_cq_uk *cq, u16 cq_cnt) +{ + u64 temp_val; + u16 sw_cq_sel; + u8 arm_next_se; + u8 arm_next; + u8 arm_seq_num; + + get_64bit_val(cq->shadow_area, 32, &temp_val); + + sw_cq_sel = (u16)RS_64(temp_val, IRDMA_CQ_DBSA_SW_CQ_SELECT); + sw_cq_sel += cq_cnt; + + arm_seq_num = (u8)RS_64(temp_val, IRDMA_CQ_DBSA_ARM_SEQ_NUM); + arm_next_se = (u8)RS_64(temp_val, IRDMA_CQ_DBSA_ARM_NEXT_SE); + arm_next = (u8)RS_64(temp_val, IRDMA_CQ_DBSA_ARM_NEXT); + + temp_val = LS_64(arm_seq_num, IRDMA_CQ_DBSA_ARM_SEQ_NUM) | + LS_64(sw_cq_sel, IRDMA_CQ_DBSA_SW_CQ_SELECT) | + LS_64(arm_next_se, IRDMA_CQ_DBSA_ARM_NEXT_SE) | + LS_64(arm_next, IRDMA_CQ_DBSA_ARM_NEXT); + + set_64bit_val(cq->shadow_area, 32, temp_val); +} + +/** + * irdma_cq_request_notification - cq notification request (door bell) + * @cq: hw cq + * @cq_notify: notification type + */ +static void irdma_cq_request_notification(struct irdma_cq_uk *cq, + enum irdma_cmpl_notify cq_notify) +{ + u64 temp_val; + u16 sw_cq_sel; + u8 arm_next_se = 0; + u8 arm_next = 0; + u8 arm_seq_num; + + get_64bit_val(cq->shadow_area, 32, &temp_val); + arm_seq_num = (u8)RS_64(temp_val, IRDMA_CQ_DBSA_ARM_SEQ_NUM); + arm_seq_num++; + sw_cq_sel = (u16)RS_64(temp_val, IRDMA_CQ_DBSA_SW_CQ_SELECT); + arm_next_se = (u8)RS_64(temp_val, IRDMA_CQ_DBSA_ARM_NEXT_SE); + arm_next_se |= 1; + if (cq_notify == IRDMA_CQ_COMPL_EVENT) + arm_next = 1; + temp_val = LS_64(arm_seq_num, IRDMA_CQ_DBSA_ARM_SEQ_NUM) | + LS_64(sw_cq_sel, IRDMA_CQ_DBSA_SW_CQ_SELECT) | + LS_64(arm_next_se, IRDMA_CQ_DBSA_ARM_NEXT_SE) | + LS_64(arm_next, IRDMA_CQ_DBSA_ARM_NEXT); + + set_64bit_val(cq->shadow_area, 32, temp_val); + + dma_wmb(); /* make sure WQE is populated before valid bit is set */ + + writel(cq->cq_id, cq->cqe_alloc_db); +} + +/** + * irdma_cq_post_entries - update tail in shadow memory + * @cq: hw cq + * @count: # of entries processed + */ +static enum irdma_status_code irdma_cq_post_entries(struct irdma_cq_uk *cq, + u8 count) +{ + IRDMA_RING_MOVE_TAIL_BY_COUNT(cq->cq_ring, count); + set_64bit_val(cq->shadow_area, 0, + IRDMA_RING_CURRENT_HEAD(cq->cq_ring)); + + return 0; +} + +/** + * irdma_cq_poll_cmpl - get cq completion info + * @cq: hw cq + * @info: cq poll information 
returned + */ +static enum irdma_status_code +irdma_cq_poll_cmpl(struct irdma_cq_uk *cq, struct irdma_cq_poll_info *info) +{ + u64 comp_ctx, qword0, qword2, qword3, qword4, qword6, qword7, wqe_qword; + __le64 *cqe, *sw_wqe; + struct irdma_qp_uk *qp; + struct irdma_ring *pring = NULL; + u32 wqe_idx, q_type, array_idx = 0; + enum irdma_status_code ret_code = 0; + bool move_cq_head = true; + u8 polarity; + bool ext_valid; + __le64 *ext_cqe; + u32 peek_head; + unsigned long flags = 0; + + if (cq->avoid_mem_cflct) + cqe = IRDMA_GET_CURRENT_EXTENDED_CQ_ELEM(cq); + else + cqe = IRDMA_GET_CURRENT_CQ_ELEM(cq); + + get_64bit_val(cqe, 24, &qword3); + polarity = (u8)RS_64(qword3, IRDMA_CQ_VALID); + if (polarity != cq->polarity) + return IRDMA_ERR_Q_EMPTY; + + /* Ensure CQE contents are read after valid bit is checked */ + dma_rmb(); + + ext_valid = (bool)RS_64(qword3, IRDMA_CQ_EXTCQE); + if (ext_valid) { + if (cq->avoid_mem_cflct) { + ext_cqe = (__le64 *)((u8 *)cqe + 32); + get_64bit_val(ext_cqe, 24, &qword7); + polarity = (u8)RS_64(qword7, IRDMA_CQ_VALID); + } else { + peek_head = (cq->cq_ring.head + 1) % cq->cq_ring.size; + ext_cqe = cq->cq_base[peek_head].buf; + get_64bit_val(ext_cqe, 24, &qword7); + polarity = (u8)RS_64(qword7, IRDMA_CQ_VALID); + if (!peek_head) + polarity ^= 1; + } + if (polarity != cq->polarity) + return IRDMA_ERR_Q_EMPTY; + + /* Ensure ext CQE contents are read after ext valid bit is checked */ + dma_rmb(); + + info->imm_valid = (bool)RS_64(qword7, IRDMA_CQ_IMMVALID); + if (info->imm_valid) { + get_64bit_val(ext_cqe, 0, &qword4); + info->imm_data = (u32)RS_64(qword4, IRDMA_CQ_IMMDATALOW32); + } + info->ud_smac_valid = (bool)RS_64(qword7, IRDMA_CQ_UDSMACVALID); + info->ud_vlan_valid = (bool)RS_64(qword7, IRDMA_CQ_UDVLANVALID); + if (info->ud_smac_valid || info->ud_vlan_valid) { + get_64bit_val(ext_cqe, 16, &qword6); + if (info->ud_vlan_valid) + info->ud_vlan = (u16)RS_64(qword6, IRDMA_CQ_UDVLAN); + if (info->ud_smac_valid) { + info->ud_smac[5] = qword6 & 0xFF; + info->ud_smac[4] = (qword6 >> 8) & 0xFF; + info->ud_smac[3] = (qword6 >> 16) & 0xFF; + info->ud_smac[2] = (qword6 >> 24) & 0xFF; + info->ud_smac[1] = (qword6 >> 32) & 0xFF; + info->ud_smac[0] = (qword6 >> 40) & 0xFF; + } + } + } else { + info->imm_valid = false; + info->ud_smac_valid = false; + info->ud_vlan_valid = false; + } + + q_type = (u8)RS_64(qword3, IRDMA_CQ_SQ); + info->error = (bool)RS_64(qword3, IRDMA_CQ_ERROR); + info->push_dropped = (bool)RS_64(qword3, IRDMACQ_PSHDROP); + info->ipv4 = (bool)RS_64(qword3, IRDMACQ_IPV4); + if (info->error) { + info->major_err = RS_64(qword3, IRDMA_CQ_MAJERR); + info->minor_err = RS_64(qword3, IRDMA_CQ_MINERR); + if (info->major_err == IRDMA_FLUSH_MAJOR_ERR) + info->comp_status = IRDMA_COMPL_STATUS_FLUSHED; + else if (info->major_err == IRDMA_LEN_MAJOR_ERR) + info->comp_status = IRDMA_COMPL_STATUS_INVALID_LEN; + else + info->comp_status = IRDMA_COMPL_STATUS_UNKNOWN; + } else { + info->comp_status = IRDMA_COMPL_STATUS_SUCCESS; + } + + get_64bit_val(cqe, 0, &qword0); + get_64bit_val(cqe, 16, &qword2); + + info->tcp_seq_num_rtt = (u32)RS_64(qword0, IRDMACQ_TCPSEQNUMRTT); + info->qp_id = (u32)RS_64(qword2, IRDMACQ_QPID); + info->ud_src_qpn = (u32)RS_64(qword2, IRDMACQ_UDSRCQPN); + + get_64bit_val(cqe, 8, &comp_ctx); + + info->solicited_event = (bool)RS_64(qword3, IRDMACQ_SOEVENT); + + qp = (struct irdma_qp_uk *)(unsigned long)comp_ctx; + if (!qp) { + ret_code = IRDMA_ERR_Q_DESTROYED; + goto exit; + } + wqe_idx = (u32)RS_64(qword3, IRDMA_CQ_WQEIDX); + info->qp_handle = 
(irdma_qp_handle)(unsigned long)qp; + + if (q_type == IRDMA_CQE_QTYPE_RQ) { + array_idx = wqe_idx / qp->rq_wqe_size_multiplier; + if (info->comp_status == IRDMA_COMPL_STATUS_FLUSHED || + info->comp_status == IRDMA_COMPL_STATUS_INVALID_LEN) { + if (!IRDMA_RING_MORE_WORK(qp->rq_ring)) { + ret_code = IRDMA_ERR_Q_EMPTY; + goto exit; + } + + info->wr_id = qp->rq_wrid_array[qp->rq_ring.tail]; + array_idx = qp->rq_ring.tail; + } else { + info->wr_id = qp->rq_wrid_array[array_idx]; + } + + if (info->imm_valid) + info->op_type = IRDMA_OP_TYPE_REC_IMM; + else + info->op_type = IRDMA_OP_TYPE_REC; + if (qword3 & IRDMACQ_STAG_M) { + info->stag_invalid_set = true; + info->inv_stag = (u32)RS_64(qword2, IRDMACQ_INVSTAG); + } else { + info->stag_invalid_set = false; + } + info->bytes_xfered = (u32)RS_64(qword0, IRDMACQ_PAYLDLEN); + IRDMA_RING_SET_TAIL(qp->rq_ring, array_idx + 1); + if (info->comp_status == IRDMA_COMPL_STATUS_FLUSHED) { + spin_lock_irqsave(qp->lock, flags); + if (!IRDMA_RING_MORE_WORK(qp->rq_ring)) + qp->rq_flush_complete = true; + else + move_cq_head = false; + spin_unlock_irqrestore(qp->lock, flags); + } + pring = &qp->rq_ring; + } else { /* q_type is IRDMA_CQE_QTYPE_SQ */ + if (qp->first_sq_wq) { + qp->first_sq_wq = false; + if (!wqe_idx && qp->sq_ring.head == qp->sq_ring.tail) { + IRDMA_RING_MOVE_HEAD_NOCHECK(cq->cq_ring); + IRDMA_RING_MOVE_TAIL(cq->cq_ring); + set_64bit_val(cq->shadow_area, 0, + IRDMA_RING_CURRENT_HEAD(cq->cq_ring)); + memset(info, 0, + sizeof(struct irdma_cq_poll_info)); + return irdma_cq_poll_cmpl(cq, info); + } + } + /*cease posting push mode on push drop*/ + if (info->push_dropped) + qp->push_mode = false; + + if (info->comp_status != IRDMA_COMPL_STATUS_FLUSHED) { + info->wr_id = qp->sq_wrtrk_array[wqe_idx].wrid; + if (!info->comp_status) + info->bytes_xfered = qp->sq_wrtrk_array[wqe_idx].wr_len; + info->op_type = (u8)RS_64(qword3, IRDMACQ_OP); + sw_wqe = qp->sq_base[wqe_idx].elem; + get_64bit_val(sw_wqe, 24, &wqe_qword); + IRDMA_RING_SET_TAIL(qp->sq_ring, + wqe_idx + qp->sq_wrtrk_array[wqe_idx].quanta); + } else { + if (!IRDMA_RING_MORE_WORK(qp->sq_ring)) { + ret_code = IRDMA_ERR_Q_EMPTY; + goto exit; + } + + do { + u8 op_type; + u32 tail; + + tail = qp->sq_ring.tail; + sw_wqe = qp->sq_base[tail].elem; + get_64bit_val(sw_wqe, 24, + &wqe_qword); + op_type = (u8)RS_64(wqe_qword, IRDMAQPSQ_OPCODE); + info->op_type = op_type; + IRDMA_RING_SET_TAIL(qp->sq_ring, + tail + qp->sq_wrtrk_array[tail].quanta); + if (op_type != IRDMAQP_OP_NOP) { + info->wr_id = qp->sq_wrtrk_array[tail].wrid; + info->bytes_xfered = qp->sq_wrtrk_array[tail].wr_len; + break; + } + } while (1); + spin_lock_irqsave(qp->lock, flags); + if (!IRDMA_RING_MORE_WORK(qp->sq_ring)) + qp->sq_flush_complete = true; + else + move_cq_head = false; + spin_unlock_irqrestore(qp->lock, flags); + } + pring = &qp->sq_ring; + } + + ret_code = 0; + +exit: + + if (move_cq_head) { + IRDMA_RING_MOVE_HEAD_NOCHECK(cq->cq_ring); + if (!IRDMA_RING_CURRENT_HEAD(cq->cq_ring)) + cq->polarity ^= 1; + + if (ext_valid && !cq->avoid_mem_cflct) { + IRDMA_RING_MOVE_HEAD_NOCHECK(cq->cq_ring); + if (!IRDMA_RING_CURRENT_HEAD(cq->cq_ring)) + cq->polarity ^= 1; + } + + IRDMA_RING_MOVE_TAIL(cq->cq_ring); + if (!cq->avoid_mem_cflct && ext_valid) + IRDMA_RING_MOVE_TAIL(cq->cq_ring); + set_64bit_val(cq->shadow_area, 0, + IRDMA_RING_CURRENT_HEAD(cq->cq_ring)); + } else { + qword3 &= ~IRDMA_CQ_WQEIDX_M; + qword3 |= LS_64(pring->tail, IRDMA_CQ_WQEIDX); + set_64bit_val(cqe, 24, qword3); + } + + return ret_code; +} + +/** + * irdma_qp_roundup - 
return round up qp wq depth + * @wqdepth: wq depth in quanta to round up + */ +static int irdma_qp_round_up(u32 wqdepth) +{ + int scount = 1; + + for (wqdepth--; scount <= 16; scount *= 2) + wqdepth |= wqdepth >> scount; + + return ++wqdepth; +} + +/** + * irdma_get_wqe_shift - get shift count for maximum wqe size + * @uk_attrs: qp HW attributes + * @sge: Maximum Scatter Gather Elements wqe + * @inline_data: Maximum inline data size + * @shift: Returns the shift needed based on sge + * + * Shift can be used to left shift the wqe size based on number of SGEs and inlind data size. + * For 1 SGE or inline data <= 8, shift = 0 (wqe size of 32 + * bytes). For 2 or 3 SGEs or inline data <= 39, shift = 1 (wqe + * size of 64 bytes). + * For 4-7 SGE's and inline <= 101 Shift of 2 otherwise (wqe + * size of 256 bytes). + */ +void irdma_get_wqe_shift(struct irdma_uk_attrs *uk_attrs, u32 sge, + u32 inline_data, u8 *shift) +{ + *shift = 0; + if (uk_attrs->hw_rev > IRDMA_GEN_1) { + if (sge > 1 || inline_data > 8) { + if (sge < 4 && inline_data <= 39) + *shift = 1; + else if (sge < 8 && inline_data <= 101) + *shift = 2; + else + *shift = 3; + } + } else if (sge > 1 || inline_data > 16) { + *shift = (sge < 4 && inline_data <= 48) ? 1 : 2; + } +} + +/* + * irdma_get_sqdepth - get SQ depth (quanta) + * @uk_attrs: qp HW attributes + * @sq_size: SQ size + * @shift: shift which determines size of WQE + * @sqdepth: depth of SQ + * + */ +enum irdma_status_code irdma_get_sqdepth(struct irdma_uk_attrs *uk_attrs, + u32 sq_size, u8 shift, u32 *sqdepth) +{ + *sqdepth = irdma_qp_round_up((sq_size << shift) + IRDMA_SQ_RSVD); + + if (*sqdepth < (IRDMA_QP_SW_MIN_WQSIZE << shift)) + *sqdepth = IRDMA_QP_SW_MIN_WQSIZE << shift; + else if (*sqdepth > uk_attrs->max_hw_wq_quanta) + return IRDMA_ERR_INVALID_SIZE; + + return 0; +} + +/* + * irdma_get_rqdepth - get RQ depth (quanta) + * @uk_attrs: qp HW attributes + * @rq_size: RQ size + * @shift: shift which determines size of WQE + * @rqdepth: depth of RQ + */ +enum irdma_status_code irdma_get_rqdepth(struct irdma_uk_attrs *uk_attrs, + u32 rq_size, u8 shift, u32 *rqdepth) +{ + *rqdepth = irdma_qp_round_up((rq_size << shift) + IRDMA_RQ_RSVD); + + if (*rqdepth < (IRDMA_QP_SW_MIN_WQSIZE << shift)) + *rqdepth = IRDMA_QP_SW_MIN_WQSIZE << shift; + else if (*rqdepth > uk_attrs->max_hw_rq_quanta) + return IRDMA_ERR_INVALID_SIZE; + + return 0; +} + +static struct irdma_qp_uk_ops iw_qp_uk_ops = { + .iw_inline_rdma_write = irdma_inline_rdma_write, + .iw_inline_send = irdma_inline_send, + .iw_mw_bind = irdma_mw_bind, + .iw_post_nop = irdma_nop, + .iw_post_receive = irdma_post_receive, + .iw_qp_post_wr = irdma_qp_post_wr, + .iw_qp_ring_push_db = irdma_qp_ring_push_db, + .iw_rdma_read = irdma_rdma_read, + .iw_rdma_write = irdma_rdma_write, + .iw_send = irdma_send, + .iw_stag_local_invalidate = irdma_stag_local_invalidate, +}; + +static struct irdma_wqe_uk_ops iw_wqe_uk_ops = { + .iw_copy_inline_data = irdma_copy_inline_data, + .iw_inline_data_size_to_quanta = irdma_inline_data_size_to_quanta, + .iw_set_fragment = irdma_set_fragment, + .iw_set_mw_bind_wqe = irdma_set_mw_bind_wqe, +}; + +static struct irdma_wqe_uk_ops iw_wqe_uk_ops_gen_1 = { + .iw_copy_inline_data = irdma_copy_inline_data_gen_1, + .iw_inline_data_size_to_quanta = irdma_inline_data_size_to_quanta_gen_1, + .iw_set_fragment = irdma_set_fragment_gen_1, + .iw_set_mw_bind_wqe = irdma_set_mw_bind_wqe_gen_1, +}; + +static struct irdma_cq_ops iw_cq_ops = { + .iw_cq_clean = irdma_clean_cq, + .iw_cq_poll_cmpl = irdma_cq_poll_cmpl, + 
.iw_cq_post_entries = irdma_cq_post_entries, + .iw_cq_request_notification = irdma_cq_request_notification, + .iw_cq_resize = irdma_cq_resize, + .iw_cq_set_resized_cnt = irdma_cq_set_resized_cnt, +}; + +static struct irdma_device_uk_ops iw_device_uk_ops = { + .iw_cq_uk_init = irdma_cq_uk_init, + .iw_qp_uk_init = irdma_qp_uk_init, +}; + +/** + * irdma_setup_connection_wqes - setup WQEs necessary to complete + * connection. + * @qp: hw qp (user and kernel) + * @info: qp initialization info + */ +static void irdma_setup_connection_wqes(struct irdma_qp_uk *qp, + struct irdma_qp_uk_init_info *info) +{ + u16 move_cnt = 1; + + if (info->abi_ver > 5 && + (qp->uk_attrs->feature_flags & IRDMA_FEATURE_RTS_AE)) + move_cnt = 3; + + IRDMA_RING_MOVE_HEAD_BY_COUNT_NOCHECK(qp->sq_ring, move_cnt); + IRDMA_RING_MOVE_TAIL_BY_COUNT(qp->sq_ring, move_cnt); + IRDMA_RING_MOVE_HEAD_BY_COUNT_NOCHECK(qp->initial_ring, move_cnt); +} + +/** + * irdma_qp_uk_init - initialize shared qp + * @qp: hw qp (user and kernel) + * @info: qp initialization info + * + * initializes the vars used in both user and kernel mode. + * size of the wqe depends on numbers of max. fragements + * allowed. Then size of wqe * the number of wqes should be the + * amount of memory allocated for sq and rq. + */ +enum irdma_status_code irdma_qp_uk_init(struct irdma_qp_uk *qp, + struct irdma_qp_uk_init_info *info) +{ + enum irdma_status_code ret_code = 0; + u32 sq_ring_size; + u8 sqshift, rqshift; + + qp->uk_attrs = info->uk_attrs; + if (info->max_sq_frag_cnt > qp->uk_attrs->max_hw_wq_frags || + info->max_rq_frag_cnt > qp->uk_attrs->max_hw_wq_frags) + return IRDMA_ERR_INVALID_FRAG_COUNT; + + irdma_get_wqe_shift(qp->uk_attrs, info->max_rq_frag_cnt, 0, &rqshift); + if (qp->uk_attrs->hw_rev == IRDMA_GEN_1) { + irdma_get_wqe_shift(qp->uk_attrs, info->max_sq_frag_cnt, + info->max_inline_data, &sqshift); + if (info->abi_ver > 4) + rqshift = IRDMA_MAX_RQ_WQE_SHIFT_GEN1; + } else { + irdma_get_wqe_shift(qp->uk_attrs, info->max_sq_frag_cnt + 1, + info->max_inline_data, &sqshift); + } + qp->qp_caps = info->qp_caps; + qp->sq_base = info->sq; + qp->rq_base = info->rq; + qp->shadow_area = info->shadow_area; + qp->sq_wrtrk_array = info->sq_wrtrk_array; + qp->rq_wrid_array = info->rq_wrid_array; + qp->wqe_alloc_db = info->wqe_alloc_db; + qp->qp_id = info->qp_id; + qp->sq_size = info->sq_size; + qp->push_mode = false; + qp->max_sq_frag_cnt = info->max_sq_frag_cnt; + sq_ring_size = qp->sq_size << sqshift; + IRDMA_RING_INIT(qp->sq_ring, sq_ring_size); + IRDMA_RING_INIT(qp->initial_ring, sq_ring_size); + if (info->first_sq_wq) { + irdma_setup_connection_wqes(qp, info); + qp->swqe_polarity = 1; + qp->first_sq_wq = true; + } else { + qp->swqe_polarity = 0; + } + qp->swqe_polarity_deferred = 1; + qp->rwqe_polarity = 0; + qp->rq_size = info->rq_size; + qp->max_rq_frag_cnt = info->max_rq_frag_cnt; + qp->max_inline_data = info->max_inline_data; + qp->rq_wqe_size = rqshift; + IRDMA_RING_INIT(qp->rq_ring, qp->rq_size); + qp->rq_wqe_size_multiplier = 1 << rqshift; + qp->qp_ops = iw_qp_uk_ops; + if (qp->uk_attrs->hw_rev == IRDMA_GEN_1) + qp->wqe_ops = iw_wqe_uk_ops_gen_1; + else + qp->wqe_ops = iw_wqe_uk_ops; + + return ret_code; +} + +/** + * irdma_cq_uk_init - initialize shared cq (user and kernel) + * @cq: hw cq + * @info: hw cq initialization info + */ +enum irdma_status_code irdma_cq_uk_init(struct irdma_cq_uk *cq, + struct irdma_cq_uk_init_info *info) +{ + cq->cq_base = info->cq_base; + cq->cq_id = info->cq_id; + cq->cq_size = info->cq_size; + cq->cqe_alloc_db = 
info->cqe_alloc_db; + cq->cq_ack_db = info->cq_ack_db; + cq->shadow_area = info->shadow_area; + cq->avoid_mem_cflct = info->avoid_mem_cflct; + IRDMA_RING_INIT(cq->cq_ring, cq->cq_size); + cq->polarity = 1; + cq->ops = iw_cq_ops; + + return 0; +} + +/** + * irdma_device_init_uk - setup routines for iwarp shared device + * @dev: iwarp shared (user and kernel) + */ +void irdma_device_init_uk(struct irdma_dev_uk *dev) +{ + dev->ops_uk = iw_device_uk_ops; +} + +/** + * irdma_clean_cq - clean cq entries + * @q: completion context + * @cq: cq to clean + */ +void irdma_clean_cq(void *q, struct irdma_cq_uk *cq) +{ + __le64 *cqe; + u64 qword3, comp_ctx; + u32 cq_head; + u8 polarity, temp; + + cq_head = cq->cq_ring.head; + temp = cq->polarity; + do { + if (cq->avoid_mem_cflct) + cqe = ((struct irdma_extended_cqe *)(cq->cq_base))[cq_head].buf; + else + cqe = cq->cq_base[cq_head].buf; + get_64bit_val(cqe, 24, &qword3); + polarity = (u8)RS_64(qword3, IRDMA_CQ_VALID); + + if (polarity != temp) + break; + + get_64bit_val(cqe, 8, &comp_ctx); + if ((void *)(unsigned long)comp_ctx == q) + set_64bit_val(cqe, 8, 0); + + cq_head = (cq_head + 1) % cq->cq_ring.size; + if (!cq_head) + temp ^= 1; + } while (true); +} + +/** + * irdma_nop - post a nop + * @qp: hw qp ptr + * @wr_id: work request id + * @signaled: signaled for completion + * @post_sq: ring doorbell + */ +enum irdma_status_code irdma_nop(struct irdma_qp_uk *qp, u64 wr_id, + bool signaled, bool post_sq) +{ + __le64 *wqe; + u64 hdr; + u32 wqe_idx; + struct irdma_post_sq_info info = {}; + + info.push_wqe = false; + info.wr_id = wr_id; + wqe = irdma_qp_get_next_send_wqe(qp, &wqe_idx, IRDMA_QP_WQE_MIN_QUANTA, + 0, &info); + if (!wqe) + return IRDMA_ERR_QP_TOOMANY_WRS_POSTED; + + irdma_clr_wqes(qp, wqe_idx); + + set_64bit_val(wqe, 0, 0); + set_64bit_val(wqe, 8, 0); + set_64bit_val(wqe, 16, 0); + + hdr = LS_64(IRDMAQP_OP_NOP, IRDMAQPSQ_OPCODE) | + LS_64(signaled, IRDMAQPSQ_SIGCOMPL) | + LS_64(qp->swqe_polarity, IRDMAQPSQ_VALID); + + dma_wmb(); /* make sure WQE is populated before valid bit is set */ + + set_64bit_val(wqe, 24, hdr); + if (post_sq) + irdma_qp_post_wr(qp); + + return 0; +} + +/** + * irdma_fragcnt_to_quanta_sq - calculate quanta based on fragment count for SQ + * @frag_cnt: number of fragments + * @quanta: quanta for frag_cnt + */ +enum irdma_status_code irdma_fragcnt_to_quanta_sq(u32 frag_cnt, u16 *quanta) +{ + switch (frag_cnt) { + case 0: + case 1: + *quanta = IRDMA_QP_WQE_MIN_QUANTA; + break; + case 2: + case 3: + *quanta = 2; + break; + case 4: + case 5: + *quanta = 3; + break; + case 6: + case 7: + *quanta = 4; + break; + case 8: + case 9: + *quanta = 5; + break; + case 10: + case 11: + *quanta = 6; + break; + case 12: + case 13: + *quanta = 7; + break; + case 14: + case 15: /* when immediate data is present */ + *quanta = 8; + break; + default: + return IRDMA_ERR_INVALID_FRAG_COUNT; + } + + return 0; +} + +/** + * irdma_fragcnt_to_wqesize_rq - calculate wqe size based on fragment count for RQ + * @frag_cnt: number of fragments + * @wqe_size: size in bytes given frag_cnt + */ +enum irdma_status_code irdma_fragcnt_to_wqesize_rq(u32 frag_cnt, u16 *wqe_size) +{ + switch (frag_cnt) { + case 0: + case 1: + *wqe_size = 32; + break; + case 2: + case 3: + *wqe_size = 64; + break; + case 4: + case 5: + case 6: + case 7: + *wqe_size = 128; + break; + case 8: + case 9: + case 10: + case 11: + case 12: + case 13: + case 14: + *wqe_size = 256; + break; + default: + return IRDMA_ERR_INVALID_FRAG_COUNT; + } + + return 0; +} diff --git 
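A quick equivalence check for the two fragment-count tables above (illustrative only, assuming IRDMA_QP_WQE_MIN_QUANTA is one quantum): the SQ helper reduces to quanta = frag_cnt / 2 + 1 for 0-15 fragments, and the RQ helper rounds the WQE up to the next 32/64/128/256-byte size for 0-14 fragments.

#include <stdio.h>

/* Closed-form equivalents of the switch statements above (not driver code). */
static unsigned int quanta_sq(unsigned int frag_cnt)
{
	return frag_cnt / 2 + 1;	/* 0-1 -> 1, 2-3 -> 2, ..., 14-15 -> 8 */
}

static unsigned int wqesize_rq(unsigned int frag_cnt)
{
	if (frag_cnt <= 1)
		return 32;
	if (frag_cnt <= 3)
		return 64;
	if (frag_cnt <= 7)
		return 128;
	return 256;			/* 8-14 fragments */
}

int main(void)
{
	unsigned int f;

	for (f = 0; f <= 14; f++)
		printf("frag_cnt=%2u -> SQ quanta=%u, RQ wqe=%u bytes\n",
		       f, quanta_sq(f), wqesize_rq(f));
	return 0;
}
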
a/drivers/infiniband/hw/irdma/user.h b/drivers/infiniband/hw/irdma/user.h new file mode 100644 index 000000000000..aba2b0917aa8 --- /dev/null +++ b/drivers/infiniband/hw/irdma/user.h @@ -0,0 +1,448 @@ +/* SPDX-License-Identifier: GPL-2.0 or Linux-OpenIB */ +/* Copyright (c) 2015 - 2019 Intel Corporation */ +#ifndef IRDMA_USER_H +#define IRDMA_USER_H + +#define irdma_handle void * +#define irdma_adapter_handle irdma_handle +#define irdma_qp_handle irdma_handle +#define irdma_cq_handle irdma_handle +#define irdma_pd_id irdma_handle +#define irdma_stag_handle irdma_handle +#define irdma_stag_index u32 +#define irdma_stag u32 +#define irdma_stag_key u8 +#define irdma_tagged_offset u64 +#define irdma_access_privileges u32 +#define irdma_physical_fragment u64 +#define irdma_address_list u64 * +#define irdma_sgl struct irdma_sge * + +#define IRDMA_MAX_MR_SIZE 0x7FFFFFFFL + +#define IRDMA_ACCESS_FLAGS_LOCALREAD 0x01 +#define IRDMA_ACCESS_FLAGS_LOCALWRITE 0x02 +#define IRDMA_ACCESS_FLAGS_REMOTEREAD_ONLY 0x04 +#define IRDMA_ACCESS_FLAGS_REMOTEREAD 0x05 +#define IRDMA_ACCESS_FLAGS_REMOTEWRITE_ONLY 0x08 +#define IRDMA_ACCESS_FLAGS_REMOTEWRITE 0x0a +#define IRDMA_ACCESS_FLAGS_BIND_WINDOW 0x10 +#define IRDMA_ACCESS_FLAGS_ALL 0x1f + +#define IRDMA_OP_TYPE_RDMA_WRITE 0x00 +#define IRDMA_OP_TYPE_RDMA_READ 0x01 +#define IRDMA_OP_TYPE_SEND 0x03 +#define IRDMA_OP_TYPE_SEND_INV 0x04 +#define IRDMA_OP_TYPE_SEND_SOL 0x05 +#define IRDMA_OP_TYPE_SEND_SOL_INV 0x06 +#define IRDMA_OP_TYPE_RDMA_WRITE_SOL 0x0d +#define IRDMA_OP_TYPE_BIND_MW 0x08 +#define IRDMA_OP_TYPE_FAST_REG_NSMR 0x09 +#define IRDMA_OP_TYPE_INV_STAG 0x0a +#define IRDMA_OP_TYPE_RDMA_READ_INV_STAG 0x0b +#define IRDMA_OP_TYPE_NOP 0x0c +#define IRDMA_OP_TYPE_REC 0x3e +#define IRDMA_OP_TYPE_REC_IMM 0x3f + +#define IRDMA_FLUSH_MAJOR_ERR 1 +#define IRDMA_LEN_MAJOR_ERR 2 + +enum irdma_device_caps_const { + IRDMA_WQE_SIZE = 4, + IRDMA_CQP_WQE_SIZE = 8, + IRDMA_CQE_SIZE = 4, + IRDMA_EXTENDED_CQE_SIZE = 8, + IRDMA_AEQE_SIZE = 2, + IRDMA_CEQE_SIZE = 1, + IRDMA_CQP_CTX_SIZE = 8, + IRDMA_SHADOW_AREA_SIZE = 8, + IRDMA_QUERY_FPM_BUF_SIZE = 176, + IRDMA_COMMIT_FPM_BUF_SIZE = 176, + IRDMA_GATHER_STATS_BUF_SIZE = 1024, + IRDMA_MIN_IW_QP_ID = 0, + IRDMA_MAX_IW_QP_ID = 262143, + IRDMA_MIN_CEQID = 0, + IRDMA_MAX_CEQID = 1023, + IRDMA_CEQ_MAX_COUNT = IRDMA_MAX_CEQID + 1, + IRDMA_MIN_CQID = 0, + IRDMA_MAX_CQID = 524287, + IRDMA_MIN_AEQ_ENTRIES = 1, + IRDMA_MAX_AEQ_ENTRIES = 524287, + IRDMA_MIN_CEQ_ENTRIES = 1, + IRDMA_MAX_CEQ_ENTRIES = 262143, + IRDMA_MIN_CQ_SIZE = 1, + IRDMA_MAX_CQ_SIZE = 1048575, + IRDMA_DB_ID_ZERO = 0, + IRDMA_MAX_WQ_FRAGMENT_COUNT = 13, + IRDMA_MAX_SGE_RD = 13, + IRDMA_MAX_OUTBOUND_MSG_SIZE = 2147483647, + IRDMA_MAX_INBOUND_MSG_SIZE = 2147483647, + IRDMA_MAX_PUSH_PAGE_COUNT = 4096, + IRDMA_MAX_PE_ENA_VF_COUNT = 32, + IRDMA_MAX_VF_FPM_ID = 47, + IRDMA_MAX_SQ_PAYLOAD_SIZE = 2145386496, + IRDMA_MAX_INLINE_DATA_SIZE = 96, + IRDMA_MAX_IRD_SIZE = 127, + IRDMA_MAX_ORD_SIZE = 255, + IRDMA_MAX_WQ_ENTRIES = 32768, + IRDMA_Q2_BUF_SIZE = 256, + IRDMA_QP_CTX_SIZE = 256, + IRDMA_MAX_PDS = 262144, +}; + +enum irdma_addressing_type { + IRDMA_ADDR_TYPE_ZERO_BASED = 0, + IRDMA_ADDR_TYPE_VA_BASED = 1, +}; + +enum irdma_cmpl_status { + IRDMA_COMPL_STATUS_SUCCESS = 0, + IRDMA_COMPL_STATUS_FLUSHED, + IRDMA_COMPL_STATUS_INVALID_WQE, + IRDMA_COMPL_STATUS_QP_CATASTROPHIC, + IRDMA_COMPL_STATUS_REMOTE_TERMINATION, + IRDMA_COMPL_STATUS_INVALID_STAG, + IRDMA_COMPL_STATUS_BASE_BOUND_VIOLATION, + IRDMA_COMPL_STATUS_ACCESS_VIOLATION, + IRDMA_COMPL_STATUS_INVALID_PD_ID, + 
IRDMA_COMPL_STATUS_WRAP_ERROR, + IRDMA_COMPL_STATUS_STAG_INVALID_PDID, + IRDMA_COMPL_STATUS_RDMA_READ_ZERO_ORD, + IRDMA_COMPL_STATUS_QP_NOT_PRIVLEDGED, + IRDMA_COMPL_STATUS_STAG_NOT_INVALID, + IRDMA_COMPL_STATUS_INVALID_PHYS_BUF_SIZE, + IRDMA_COMPL_STATUS_INVALID_PHYS_BUF_ENTRY, + IRDMA_COMPL_STATUS_INVALID_FBO, + IRDMA_COMPL_STATUS_INVALID_LEN, + IRDMA_COMPL_STATUS_INVALID_ACCESS, + IRDMA_COMPL_STATUS_PHYS_BUF_LIST_TOO_LONG, + IRDMA_COMPL_STATUS_INVALID_VIRT_ADDRESS, + IRDMA_COMPL_STATUS_INVALID_REGION, + IRDMA_COMPL_STATUS_INVALID_WINDOW, + IRDMA_COMPL_STATUS_INVALID_TOTAL_LEN, + IRDMA_COMPL_STATUS_UNKNOWN, +}; + +enum irdma_cmpl_notify { + IRDMA_CQ_COMPL_EVENT = 0, + IRDMA_CQ_COMPL_SOLICITED = 1, +}; + +enum irdma_qp_caps { + IRDMA_WRITE_WITH_IMM = 1, + IRDMA_SEND_WITH_IMM = 2, + IRDMA_ROCE = 4, +}; + +struct irdma_qp_uk; +struct irdma_cq_uk; +struct irdma_qp_uk_init_info; +struct irdma_cq_uk_init_info; + +struct irdma_sge { + irdma_tagged_offset tag_off; + u32 len; + irdma_stag stag; +}; + +struct irdma_ring { + u32 head; + u32 tail; + u32 size; +}; + +struct irdma_cqe { + __le64 buf[IRDMA_CQE_SIZE]; +}; + +struct irdma_extended_cqe { + __le64 buf[IRDMA_EXTENDED_CQE_SIZE]; +}; + +struct irdma_post_send { + irdma_sgl sg_list; + u32 num_sges; + u32 qkey; + u32 dest_qp; + u32 ah_id; +}; + +struct irdma_post_inline_send { + void *data; + u32 len; + u32 qkey; + u32 dest_qp; + u32 ah_id; +}; + +struct irdma_rdma_write { + irdma_sgl lo_sg_list; + u32 num_lo_sges; + struct irdma_sge rem_addr; +}; + +struct irdma_inline_rdma_write { + void *data; + u32 len; + struct irdma_sge rem_addr; +}; + +struct irdma_rdma_read { + irdma_sgl lo_sg_list; + u32 num_lo_sges; + struct irdma_sge rem_addr; +}; + +struct irdma_bind_window { + irdma_stag mr_stag; + u64 bind_len; + void *va; + enum irdma_addressing_type addressing_type; + bool ena_reads:1; + bool ena_writes:1; + irdma_stag mw_stag; + bool mem_window_type_1:1; +}; + +struct irdma_inv_local_stag { + irdma_stag target_stag; +}; + +struct irdma_post_sq_info { + u64 wr_id; + u8 op_type; + u8 l4len; + bool signaled:1; + bool read_fence:1; + bool local_fence:1; + bool inline_data:1; + bool imm_data_valid:1; + bool push_wqe:1; + bool report_rtt:1; + bool udp_hdr:1; + bool defer_flag:1; + u32 imm_data; + u32 stag_to_inv; + union { + struct irdma_post_send send; + struct irdma_rdma_write rdma_write; + struct irdma_rdma_read rdma_read; + struct irdma_bind_window bind_window; + struct irdma_inv_local_stag inv_local_stag; + struct irdma_inline_rdma_write inline_rdma_write; + struct irdma_post_inline_send inline_send; + } op; +}; + +struct irdma_post_rq_info { + u64 wr_id; + irdma_sgl sg_list; + u32 num_sges; +}; + +struct irdma_cq_poll_info { + u64 wr_id; + irdma_qp_handle qp_handle; + u32 bytes_xfered; + u32 tcp_seq_num_rtt; + u32 qp_id; + u32 ud_src_qpn; + u32 imm_data; + irdma_stag inv_stag; /* or L_R_Key */ + enum irdma_cmpl_status comp_status; + u16 major_err; + u16 minor_err; + u16 ud_vlan; + u8 ud_smac[6]; + u8 op_type; + bool stag_invalid_set:1; /* or L_R_Key set */ + bool push_dropped:1; + bool error:1; + bool solicited_event:1; + bool ipv4:1; + bool ud_vlan_valid:1; + bool ud_smac_valid:1; + bool imm_valid:1; +}; + +struct irdma_qp_uk_ops { + enum irdma_status_code (*iw_rdma_write)(struct irdma_qp_uk *qp, + struct irdma_post_sq_info *info, + bool post_sq); + enum irdma_status_code (*iw_inline_send)(struct irdma_qp_uk *qp, + struct irdma_post_sq_info *info, + bool post_sq); + enum irdma_status_code (*iw_mw_bind)(struct irdma_qp_uk *qp, + struct 
irdma_post_sq_info *info, + bool post_sq); + enum irdma_status_code (*iw_post_nop)(struct irdma_qp_uk *qp, u64 wr_id, + bool signaled, bool post_sq); + enum irdma_status_code (*iw_post_receive)(struct irdma_qp_uk *qp, + struct irdma_post_rq_info *info); + void (*iw_qp_post_wr)(struct irdma_qp_uk *qp); + void (*iw_qp_ring_push_db)(struct irdma_qp_uk *qp, u32 wqe_index); + enum irdma_status_code (*iw_rdma_read)(struct irdma_qp_uk *qp, + struct irdma_post_sq_info *info, + bool inv_stag, bool post_sq); + enum irdma_status_code (*iw_inline_rdma_write)(struct irdma_qp_uk *qp, + struct irdma_post_sq_info *info, + bool post_sq); + enum irdma_status_code (*iw_send)(struct irdma_qp_uk *qp, + struct irdma_post_sq_info *info, + bool post_sq); + enum irdma_status_code (*iw_stag_local_invalidate)(struct irdma_qp_uk *qp, + struct irdma_post_sq_info *info, + bool post_sq); +}; + +struct irdma_wqe_uk_ops { + void (*iw_copy_inline_data)(u8 *dest, u8 *src, u32 len, u8 polarity); + enum irdma_status_code (*iw_inline_data_size_to_quanta)(u32 data_size, + u16 *quanta, + u32 max_size); + void (*iw_set_fragment)(__le64 *wqe, u32 offset, struct irdma_sge *sge, + u8 valid); + void (*iw_set_mw_bind_wqe)(__le64 *wqe, + struct irdma_bind_window *op_info); +}; + +struct irdma_cq_ops { + void (*iw_cq_clean)(void *q, struct irdma_cq_uk *cq); + enum irdma_status_code (*iw_cq_poll_cmpl)(struct irdma_cq_uk *cq, + struct irdma_cq_poll_info *info); + enum irdma_status_code (*iw_cq_post_entries)(struct irdma_cq_uk *cq, + u8 count); + void (*iw_cq_request_notification)(struct irdma_cq_uk *cq, + enum irdma_cmpl_notify cq_notify); + void (*iw_cq_resize)(struct irdma_cq_uk *cq, void *cq_base, int size); + void (*iw_cq_set_resized_cnt)(struct irdma_cq_uk *qp, u16 cnt); +}; + +struct irdma_dev_uk; + +struct irdma_device_uk_ops { + enum irdma_status_code (*iw_cq_uk_init)(struct irdma_cq_uk *cq, + struct irdma_cq_uk_init_info *info); + enum irdma_status_code (*iw_qp_uk_init)(struct irdma_qp_uk *qp, + struct irdma_qp_uk_init_info *info); +}; + +struct irdma_dev_uk { + struct irdma_device_uk_ops ops_uk; +}; + +struct irdma_sq_uk_wr_trk_info { + u64 wrid; + u32 wr_len; + u16 quanta; + u8 reserved[2]; +}; + +struct irdma_qp_quanta { + __le64 elem[IRDMA_WQE_SIZE]; +}; + +struct irdma_qp_uk { + struct irdma_qp_quanta *sq_base; + struct irdma_qp_quanta *rq_base; + struct irdma_uk_attrs *uk_attrs; + u32 __iomem *wqe_alloc_db; + struct irdma_sq_uk_wr_trk_info *sq_wrtrk_array; + u64 *rq_wrid_array; + __le64 *shadow_area; + u32 *push_db; + __le64 *push_wqe; + struct irdma_ring sq_ring; + struct irdma_ring rq_ring; + struct irdma_ring initial_ring; + u32 qp_id; + u32 qp_caps; + u32 sq_size; + u32 rq_size; + u32 max_sq_frag_cnt; + u32 max_rq_frag_cnt; + u32 max_inline_data; + struct irdma_qp_uk_ops qp_ops; + struct irdma_wqe_uk_ops wqe_ops; + u8 swqe_polarity; + u8 swqe_polarity_deferred; + u8 rwqe_polarity; + u8 rq_wqe_size; + u8 rq_wqe_size_multiplier; + bool deferred_flag:1; + bool push_mode:1; /* whether the last post wqe was pushed */ + bool first_sq_wq:1; + bool sq_flush_complete:1; /* Indicates flush was seen and SQ was empty after the flush */ + bool rq_flush_complete:1; /* Indicates flush was seen and RQ was empty after the flush */ + void *back_qp; + void *lock; + u8 dbg_rq_flushed; +}; + +struct irdma_cq_uk { + struct irdma_cqe *cq_base; + u32 __iomem *cqe_alloc_db; + u32 __iomem *cq_ack_db; + __le64 *shadow_area; + u32 cq_id; + u32 cq_size; + struct irdma_ring cq_ring; + u8 polarity; + struct irdma_cq_ops ops; + bool avoid_mem_cflct; 
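As an aside on the ops-table layering above: a kernel consumer that set a QP up through irdma_qp_uk_init() is expected to post work through qp_ops rather than calling the uk routines directly. A hypothetical sketch, not from this patch; it assumes the surrounding user.h types, an already-initialized QP, and that the caller keeps len <= qp->max_inline_data:

/* Hypothetical helper built on the uk ops table; illustrative only. */
static enum irdma_status_code post_small_inline_send(struct irdma_qp_uk *qp,
						     void *buf, u32 len,
						     u64 wr_id)
{
	struct irdma_post_sq_info info = {};

	info.wr_id = wr_id;
	info.op_type = IRDMA_OP_TYPE_SEND;
	info.signaled = true;
	info.inline_data = true;
	info.op.inline_send.data = buf;
	info.op.inline_send.len = len;

	/* iw_inline_send builds the WQE; post_sq=true rings the doorbell. */
	return qp->qp_ops.iw_inline_send(qp, &info, true);
}
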
+}; + +struct irdma_qp_uk_init_info { + struct irdma_qp_quanta *sq; + struct irdma_qp_quanta *rq; + struct irdma_uk_attrs *uk_attrs; + u32 __iomem *wqe_alloc_db; + __le64 *shadow_area; + struct irdma_sq_uk_wr_trk_info *sq_wrtrk_array; + u64 *rq_wrid_array; + u32 qp_id; + u32 qp_caps; + u32 sq_size; + u32 rq_size; + u32 max_sq_frag_cnt; + u32 max_rq_frag_cnt; + u32 max_inline_data; + u8 first_sq_wq; + int abi_ver; +}; + +struct irdma_cq_uk_init_info { + u32 __iomem *cqe_alloc_db; + u32 __iomem *cq_ack_db; + struct irdma_cqe *cq_base; + __le64 *shadow_area; + u32 cq_size; + u32 cq_id; + bool avoid_mem_cflct; +}; + +void irdma_device_init_uk(struct irdma_dev_uk *dev); +void irdma_qp_post_wr(struct irdma_qp_uk *qp); +__le64 *irdma_qp_get_next_send_wqe(struct irdma_qp_uk *qp, u32 *wqe_idx, + u16 quanta, u32 total_size, + struct irdma_post_sq_info *info); +__le64 *irdma_qp_get_next_recv_wqe(struct irdma_qp_uk *qp, u32 *wqe_idx); +enum irdma_status_code irdma_cq_uk_init(struct irdma_cq_uk *cq, + struct irdma_cq_uk_init_info *info); +enum irdma_status_code irdma_qp_uk_init(struct irdma_qp_uk *qp, + struct irdma_qp_uk_init_info *info); +void irdma_clean_cq(void *q, struct irdma_cq_uk *cq); +enum irdma_status_code irdma_nop(struct irdma_qp_uk *qp, u64 wr_id, + bool signaled, bool post_sq); +enum irdma_status_code irdma_fragcnt_to_quanta_sq(u32 frag_cnt, u16 *quanta); +enum irdma_status_code irdma_fragcnt_to_wqesize_rq(u32 frag_cnt, u16 *wqe_size); +void irdma_get_wqe_shift(struct irdma_uk_attrs *uk_attrs, u32 sge, + u32 inline_data, u8 *shift); +enum irdma_status_code irdma_get_sqdepth(struct irdma_uk_attrs *uk_attrs, + u32 sq_size, u8 shift, u32 *wqdepth); +enum irdma_status_code irdma_get_rqdepth(struct irdma_uk_attrs *uk_attrs, + u32 rq_size, u8 shift, u32 *wqdepth); +void irdma_qp_push_wqe(struct irdma_qp_uk *qp, __le64 *wqe, u16 quanta, + u32 wqe_idx, bool post_sq); +void irdma_clr_wqes(struct irdma_qp_uk *qp, u32 qp_wqe_idx); +#endif /* IRDMA_USER_H */ From patchwork Fri Apr 17 17:12:48 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeff Kirsher X-Patchwork-Id: 221090 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.8 required=3.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id ACF50C38A2F for ; Fri, 17 Apr 2020 17:13:38 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 8BFA82076D for ; Fri, 17 Apr 2020 17:13:38 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729554AbgDQRNh (ORCPT ); Fri, 17 Apr 2020 13:13:37 -0400 Received: from mga01.intel.com ([192.55.52.88]:30119 "EHLO mga01.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728438AbgDQRNf (ORCPT ); Fri, 17 Apr 2020 13:13:35 -0400 IronPort-SDR: QD8kB1goyoW6MB3O6ujyRicJU9hu5bGz+D41qWIfRCF3PivpVjWbfw5zvlnMwXZBZxL40gy7I8 RhM2EFH2ytaA== X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 17 Apr 2020 10:12:58 -0700 IronPort-SDR: 
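One more usage illustration for the shared CQ side of the API above (hypothetical, not from this patch; it assumes a CQ initialized via irdma_cq_uk_init() and that iw_cq_poll_cmpl reports an empty ring as IRDMA_ERR_Q_EMPTY):

/* Hypothetical drain loop over the uk CQ ops table; illustrative only. */
static void drain_cq_example(struct irdma_cq_uk *cq)
{
	struct irdma_cq_poll_info info;
	enum irdma_status_code ret;

	do {
		ret = cq->ops.iw_cq_poll_cmpl(cq, &info);
		if (!ret)
			pr_debug("wr_id=%llu status=%d bytes=%u\n",
				 info.wr_id, info.comp_status,
				 info.bytes_xfered);
	} while (!ret);

	/* Re-arm so the next completion raises an event. */
	cq->ops.iw_cq_request_notification(cq, IRDMA_CQ_COMPL_EVENT);
}
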
M+exMgXlgjzuyTvZ1i6DtLlTSeq9p8bj5WMvBpQ5nuOXZsOuWDx0PvT/KXgqAaXb7gIKETeObc w3OEldG0S+pw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.72,395,1580803200"; d="scan'208";a="364383746" Received: from jtkirshe-desk1.jf.intel.com ([134.134.177.86]) by fmsmga001.fm.intel.com with ESMTP; 17 Apr 2020 10:12:58 -0700 From: Jeff Kirsher To: gregkh@linuxfoundation.org, jgg@ziepe.ca Cc: "Michael J. Ruhl" , netdev@vger.kernel.org, linux-rdma@vger.kernel.org, nhorman@redhat.com, sassmann@redhat.com, Shiraz Saleem Subject: [RFC PATCH v5 13/16] RDMA/irdma: Add dynamic tracing for CM Date: Fri, 17 Apr 2020 10:12:48 -0700 Message-Id: <20200417171251.1533371-14-jeffrey.t.kirsher@intel.com> X-Mailer: git-send-email 2.25.2 In-Reply-To: <20200417171251.1533371-1-jeffrey.t.kirsher@intel.com> References: <20200417171251.1533371-1-jeffrey.t.kirsher@intel.com> MIME-Version: 1.0 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: "Michael J. Ruhl" Add dynamic tracing functionality to debug connection management issues. Signed-off-by: "Michael J. Ruhl" Signed-off-by: Shiraz Saleem --- drivers/infiniband/hw/irdma/trace.c | 112 ++++++ drivers/infiniband/hw/irdma/trace.h | 3 + drivers/infiniband/hw/irdma/trace_cm.h | 458 +++++++++++++++++++++++++ 3 files changed, 573 insertions(+) create mode 100644 drivers/infiniband/hw/irdma/trace.c create mode 100644 drivers/infiniband/hw/irdma/trace.h create mode 100644 drivers/infiniband/hw/irdma/trace_cm.h diff --git a/drivers/infiniband/hw/irdma/trace.c b/drivers/infiniband/hw/irdma/trace.c new file mode 100644 index 000000000000..b5133f4137e0 --- /dev/null +++ b/drivers/infiniband/hw/irdma/trace.c @@ -0,0 +1,112 @@ +// SPDX-License-Identifier: GPL-2.0 or Linux-OpenIB +/* Copyright (c) 2019 Intel Corporation */ +#define CREATE_TRACE_POINTS +#include "trace.h" + +const char *print_ip_addr(struct trace_seq *p, u32 *addr, u16 port, bool ipv4) +{ + const char *ret = trace_seq_buffer_ptr(p); + + if (ipv4) { + __be32 myaddr = htonl(*addr); + + trace_seq_printf(p, "%pI4:%d", &myaddr, htons(port)); + } else { + trace_seq_printf(p, "%pI6:%d", addr, htons(port)); + } + trace_seq_putc(p, 0); + + return ret; +} + +const char *parse_iw_event_type(enum iw_cm_event_type iw_type) +{ + switch (iw_type) { + case IW_CM_EVENT_CONNECT_REQUEST: + return "IwRequest"; + case IW_CM_EVENT_CONNECT_REPLY: + return "IwReply"; + case IW_CM_EVENT_ESTABLISHED: + return "IwEstablished"; + case IW_CM_EVENT_DISCONNECT: + return "IwDisconnect"; + case IW_CM_EVENT_CLOSE: + return "IwClose"; + } + + return "Unknown"; +} + +const char *parse_cm_event_type(enum irdma_cm_event_type cm_type) +{ + switch (cm_type) { + case IRDMA_CM_EVENT_ESTABLISHED: + return "CmEstablished"; + case IRDMA_CM_EVENT_MPA_REQ: + return "CmMPA_REQ"; + case IRDMA_CM_EVENT_MPA_CONNECT: + return "CmMPA_CONNECT"; + case IRDMA_CM_EVENT_MPA_ACCEPT: + return "CmMPA_ACCEPT"; + case IRDMA_CM_EVENT_MPA_REJECT: + return "CmMPA_REJECT"; + case IRDMA_CM_EVENT_MPA_ESTABLISHED: + return "CmMPA_ESTABLISHED"; + case IRDMA_CM_EVENT_CONNECTED: + return "CmConnected"; + case IRDMA_CM_EVENT_RESET: + return "CmReset"; + case IRDMA_CM_EVENT_ABORTED: + return "CmAborted"; + case IRDMA_CM_EVENT_UNKNOWN: + return "none"; + } + return "Unknown"; +} + +const char *parse_cm_state(enum irdma_cm_node_state state) +{ + switch (state) { + case IRDMA_CM_STATE_UNKNOWN: + return "UNKNOWN"; + case IRDMA_CM_STATE_INITED: + return "INITED"; + case IRDMA_CM_STATE_LISTENING: + return "LISTENING"; + case IRDMA_CM_STATE_SYN_RCVD: + 
return "SYN_RCVD"; + case IRDMA_CM_STATE_SYN_SENT: + return "SYN_SENT"; + case IRDMA_CM_STATE_ONE_SIDE_ESTABLISHED: + return "ONE_SIDE_ESTABLISHED"; + case IRDMA_CM_STATE_ESTABLISHED: + return "ESTABLISHED"; + case IRDMA_CM_STATE_ACCEPTING: + return "ACCEPTING"; + case IRDMA_CM_STATE_MPAREQ_SENT: + return "MPAREQ_SENT"; + case IRDMA_CM_STATE_MPAREQ_RCVD: + return "MPAREQ_RCVD"; + case IRDMA_CM_STATE_MPAREJ_RCVD: + return "MPAREJ_RECVD"; + case IRDMA_CM_STATE_OFFLOADED: + return "OFFLOADED"; + case IRDMA_CM_STATE_FIN_WAIT1: + return "FIN_WAIT1"; + case IRDMA_CM_STATE_FIN_WAIT2: + return "FIN_WAIT2"; + case IRDMA_CM_STATE_CLOSE_WAIT: + return "CLOSE_WAIT"; + case IRDMA_CM_STATE_TIME_WAIT: + return "TIME_WAIT"; + case IRDMA_CM_STATE_LAST_ACK: + return "LAST_ACK"; + case IRDMA_CM_STATE_CLOSING: + return "CLOSING"; + case IRDMA_CM_STATE_LISTENER_DESTROYED: + return "LISTENER_DESTROYED"; + case IRDMA_CM_STATE_CLOSED: + return "CLOSED"; + } + return ("Bad state"); +} diff --git a/drivers/infiniband/hw/irdma/trace.h b/drivers/infiniband/hw/irdma/trace.h new file mode 100644 index 000000000000..702e4efb018d --- /dev/null +++ b/drivers/infiniband/hw/irdma/trace.h @@ -0,0 +1,3 @@ +/* SPDX-License-Identifier: GPL-2.0 or Linux-OpenIB */ +/* Copyright (c) 2019 Intel Corporation */ +#include "trace_cm.h" diff --git a/drivers/infiniband/hw/irdma/trace_cm.h b/drivers/infiniband/hw/irdma/trace_cm.h new file mode 100644 index 000000000000..9ca6ee4efd77 --- /dev/null +++ b/drivers/infiniband/hw/irdma/trace_cm.h @@ -0,0 +1,458 @@ +/* SPDX-License-Identifier: GPL-2.0 or Linux-OpenIB */ +/* Copyright (c) 2019 Intel Corporation */ +#if !defined(__TRACE_CM_H) || defined(TRACE_HEADER_MULTI_READ) +#define __TRACE_CM_H + +#include +#include + +#include "main.h" + +const char *print_ip_addr(struct trace_seq *p, u32 *addr, u16 port, bool ivp4); +const char *parse_iw_event_type(enum iw_cm_event_type iw_type); +const char *parse_cm_event_type(enum irdma_cm_event_type cm_type); +const char *parse_cm_state(enum irdma_cm_node_state); +#define __print_ip_addr(addr, port, ipv4) print_ip_addr(p, addr, port, ipv4) + +#undef TRACE_SYSTEM +#define TRACE_SYSTEM irdma_cm + +TRACE_EVENT(irdma_create_listen, + TP_PROTO(struct irdma_device *iwdev, struct irdma_cm_info *cm_info), + TP_ARGS(iwdev, cm_info), + TP_STRUCT__entry(__field(struct irdma_device *, iwdev) + __dynamic_array(u32, laddr, 4) + __field(u16, lport) + __field(bool, ipv4) + ), + TP_fast_assign(__entry->iwdev = iwdev; + __entry->lport = cm_info->loc_port; + __entry->ipv4 = cm_info->ipv4; + memcpy(__get_dynamic_array(laddr), + cm_info->loc_addr, 4); + ), + TP_printk("iwdev=%p loc: %s", + __entry->iwdev, + __print_ip_addr(__get_dynamic_array(laddr), + __entry->lport, __entry->ipv4) + ) +); + +TRACE_EVENT(irdma_dec_refcnt_listen, + TP_PROTO(struct irdma_cm_listener *listener, void *caller), + TP_ARGS(listener, caller), + TP_STRUCT__entry(__field(struct irdma_device *, iwdev) + __field(u32, refcnt) + __dynamic_array(u32, laddr, 4) + __field(u16, lport) + __field(bool, ipv4) + __field(void *, caller) + ), + TP_fast_assign(__entry->iwdev = listener->iwdev; + __entry->lport = listener->loc_port; + __entry->ipv4 = listener->ipv4; + memcpy(__get_dynamic_array(laddr), + listener->loc_addr, 4); + ), + TP_printk("iwdev=%p caller=%pS loc: %s", + __entry->iwdev, + __entry->caller, + __print_ip_addr(__get_dynamic_array(laddr), + __entry->lport, __entry->ipv4) + ) +); + +DECLARE_EVENT_CLASS(listener_template, + TP_PROTO(struct irdma_cm_listener *listener), + TP_ARGS(listener), + 
TP_STRUCT__entry(__field(struct irdma_device *, iwdev) + __field(u16, lport) + __field(u16, vlan_id) + __field(bool, ipv4) + __field(enum irdma_cm_listener_state, + state) + __dynamic_array(u32, laddr, 4) + ), + TP_fast_assign(__entry->iwdev = listener->iwdev; + __entry->lport = listener->loc_port; + __entry->vlan_id = listener->vlan_id; + __entry->ipv4 = listener->ipv4; + __entry->state = listener->listener_state; + memcpy(__get_dynamic_array(laddr), + listener->loc_addr, 4); + ), + TP_printk("iwdev=%p vlan=%d loc: %s", + __entry->iwdev, + __entry->vlan_id, + __print_ip_addr(__get_dynamic_array(laddr), + __entry->lport, __entry->ipv4) + ) +); + +DEFINE_EVENT(listener_template, irdma_find_listener, + TP_PROTO(struct irdma_cm_listener *listener), + TP_ARGS(listener)); + +DEFINE_EVENT(listener_template, irdma_del_multiple_qhash, + TP_PROTO(struct irdma_cm_listener *listener), + TP_ARGS(listener)); + +TRACE_EVENT(irdma_negotiate_mpa_v2, + TP_PROTO(struct irdma_cm_node *cm_node), + TP_ARGS(cm_node), + TP_STRUCT__entry(__field(struct irdma_cm_node *, cm_node) + __field(u16, ord_size) + __field(u16, ird_size) + ), + TP_fast_assign(__entry->cm_node = cm_node; + __entry->ord_size = cm_node->ord_size; + __entry->ird_size = cm_node->ird_size; + ), + TP_printk("MPVA2 Negotiated cm_node=%p ORD:[%d], IRD:[%d]", + __entry->cm_node, + __entry->ord_size, + __entry->ird_size + ) +); + +DECLARE_EVENT_CLASS(tos_template, + TP_PROTO(struct irdma_device *iwdev, u8 tos, u8 user_pri), + TP_ARGS(iwdev, tos, user_pri), + TP_STRUCT__entry(__field(struct irdma_device *, iwdev) + __field(u8, tos) + __field(u8, user_pri) + ), + TP_fast_assign(__entry->iwdev = iwdev; + __entry->tos = tos; + __entry->user_pri = user_pri; + ), + TP_printk("iwdev=%p TOS:[%d] UP:[%d]", + __entry->iwdev, + __entry->tos, + __entry->user_pri + ) +); + +DEFINE_EVENT(tos_template, irdma_listener_tos, + TP_PROTO(struct irdma_device *iwdev, u8 tos, u8 user_pri), + TP_ARGS(iwdev, tos, user_pri)); + +DEFINE_EVENT(tos_template, irdma_dcb_tos, + TP_PROTO(struct irdma_device *iwdev, u8 tos, u8 user_pri), + TP_ARGS(iwdev, tos, user_pri)); + +DECLARE_EVENT_CLASS(qhash_template, + TP_PROTO(struct irdma_device *iwdev, + struct irdma_cm_listener *listener, + char *dev_addr), + TP_ARGS(iwdev, listener, dev_addr), + TP_STRUCT__entry(__field(struct irdma_device *, iwdev) + __field(u16, lport) + __field(u16, vlan_id) + __field(bool, ipv4) + __dynamic_array(u32, laddr, 4) + __dynamic_array(u32, mac, ETH_ALEN) + ), + TP_fast_assign(__entry->iwdev = iwdev; + __entry->lport = listener->loc_port; + __entry->vlan_id = listener->vlan_id; + __entry->ipv4 = listener->ipv4; + memcpy(__get_dynamic_array(laddr), + listener->loc_addr, 4); + ether_addr_copy(__get_dynamic_array(mac), + dev_addr); + ), + TP_printk("iwdev=%p vlan=%d MAC=%pM loc: %s", + __entry->iwdev, + __entry->vlan_id, + __get_dynamic_array(mac), + __print_ip_addr(__get_dynamic_array(laddr), + __entry->lport, __entry->ipv4) + ) +); + +DEFINE_EVENT(qhash_template, irdma_add_mqh_6, + TP_PROTO(struct irdma_device *iwdev, + struct irdma_cm_listener *listener, char *dev_addr), + TP_ARGS(iwdev, listener, dev_addr)); + +DEFINE_EVENT(qhash_template, irdma_add_mqh_4, + TP_PROTO(struct irdma_device *iwdev, + struct irdma_cm_listener *listener, char *dev_addr), + TP_ARGS(iwdev, listener, dev_addr)); + +TRACE_EVENT(irdma_addr_resolve, + TP_PROTO(struct irdma_device *iwdev, char *dev_addr), + TP_ARGS(iwdev, dev_addr), + TP_STRUCT__entry(__field(struct irdma_device *, iwdev) + __dynamic_array(u8, mac, ETH_ALEN) + ), + 
TP_fast_assign(__entry->iwdev = iwdev; + ether_addr_copy(__get_dynamic_array(mac), dev_addr); + ), + TP_printk("iwdev=%p MAC=%pM", __entry->iwdev, + __get_dynamic_array(mac) + ) +); + +TRACE_EVENT(irdma_send_cm_event, + TP_PROTO(struct irdma_cm_node *cm_node, struct iw_cm_id *cm_id, + enum iw_cm_event_type type, int status, void *caller), + TP_ARGS(cm_node, cm_id, type, status, caller), + TP_STRUCT__entry(__field(struct irdma_device *, iwdev) + __field(struct irdma_cm_node *, cm_node) + __field(struct iw_cm_id *, cm_id) + __field(u32, refcount) + __field(u16, lport) + __field(u16, rport) + __field(enum irdma_cm_node_state, state) + __field(bool, ipv4) + __field(u16, vlan_id) + __field(int, accel) + __field(enum iw_cm_event_type, type) + __field(int, status) + __field(void *, caller) + __dynamic_array(u32, laddr, 4) + __dynamic_array(u32, raddr, 4) + ), + TP_fast_assign(__entry->iwdev = cm_node->iwdev; + __entry->cm_node = cm_node; + __entry->cm_id = cm_id; + __entry->refcount = refcount_read(&cm_node->refcnt); + __entry->state = cm_node->state; + __entry->lport = cm_node->loc_port; + __entry->rport = cm_node->rem_port; + __entry->ipv4 = cm_node->ipv4; + __entry->vlan_id = cm_node->vlan_id; + __entry->accel = cm_node->accelerated; + __entry->type = type; + __entry->status = status; + __entry->caller = caller; + memcpy(__get_dynamic_array(laddr), + cm_node->loc_addr, 4); + memcpy(__get_dynamic_array(raddr), + cm_node->rem_addr, 4); + ), + TP_printk("iwdev=%p caller=%pS cm_id=%p node=%p refcnt=%d vlan_id=%d accel=%d state=%s event_type=%s status=%d loc: %s rem: %s", + __entry->iwdev, + __entry->caller, + __entry->cm_id, + __entry->cm_node, + __entry->refcount, + __entry->vlan_id, + __entry->accel, + parse_cm_state(__entry->state), + parse_iw_event_type(__entry->type), + __entry->status, + __print_ip_addr(__get_dynamic_array(laddr), + __entry->lport, __entry->ipv4), + __print_ip_addr(__get_dynamic_array(raddr), + __entry->rport, __entry->ipv4) + ) +); + +TRACE_EVENT(irdma_send_cm_event_no_node, + TP_PROTO(struct iw_cm_id *cm_id, enum iw_cm_event_type type, + int status, void *caller), + TP_ARGS(cm_id, type, status, caller), + TP_STRUCT__entry(__field(struct iw_cm_id *, cm_id) + __field(enum iw_cm_event_type, type) + __field(int, status) + __field(void *, caller) + ), + TP_fast_assign(__entry->cm_id = cm_id; + __entry->type = type; + __entry->status = status; + __entry->caller = caller; + ), + TP_printk("cm_id=%p caller=%pS event_type=%s status=%d", + __entry->cm_id, + __entry->caller, + parse_iw_event_type(__entry->type), + __entry->status + ) +); + +DECLARE_EVENT_CLASS(cm_node_template, + TP_PROTO(struct irdma_cm_node *cm_node, + enum irdma_cm_event_type type, void *caller), + TP_ARGS(cm_node, type, caller), + TP_STRUCT__entry(__field(struct irdma_device *, iwdev) + __field(struct irdma_cm_node *, cm_node) + __field(u32, refcount) + __field(u16, lport) + __field(u16, rport) + __field(enum irdma_cm_node_state, state) + __field(bool, ipv4) + __field(u16, vlan_id) + __field(int, accel) + __field(enum irdma_cm_event_type, type) + __field(void *, caller) + __dynamic_array(u32, laddr, 4) + __dynamic_array(u32, raddr, 4) + ), + TP_fast_assign(__entry->iwdev = cm_node->iwdev; + __entry->cm_node = cm_node; + __entry->refcount = refcount_read(&cm_node->refcnt); + __entry->state = cm_node->state; + __entry->lport = cm_node->loc_port; + __entry->rport = cm_node->rem_port; + __entry->ipv4 = cm_node->ipv4; + __entry->vlan_id = cm_node->vlan_id; + __entry->accel = cm_node->accelerated; + __entry->type = 
type; + __entry->caller = caller; + memcpy(__get_dynamic_array(laddr), + cm_node->loc_addr, 4); + memcpy(__get_dynamic_array(raddr), + cm_node->rem_addr, 4); + ), + TP_printk("iwdev=%p caller=%pS node=%p refcnt=%d vlan_id=%d accel=%d state=%s event_type=%s loc: %s rem: %s", + __entry->iwdev, + __entry->caller, + __entry->cm_node, + __entry->refcount, + __entry->vlan_id, + __entry->accel, + parse_cm_state(__entry->state), + parse_cm_event_type(__entry->type), + __print_ip_addr(__get_dynamic_array(laddr), + __entry->lport, __entry->ipv4), + __print_ip_addr(__get_dynamic_array(raddr), + __entry->rport, __entry->ipv4) + ) +); + +DEFINE_EVENT(cm_node_template, irdma_create_event, + TP_PROTO(struct irdma_cm_node *cm_node, + enum irdma_cm_event_type type, void *caller), + TP_ARGS(cm_node, type, caller)); + +DEFINE_EVENT(cm_node_template, irdma_accept, + TP_PROTO(struct irdma_cm_node *cm_node, + enum irdma_cm_event_type type, void *caller), + TP_ARGS(cm_node, type, caller)); + +DEFINE_EVENT(cm_node_template, irdma_connect, + TP_PROTO(struct irdma_cm_node *cm_node, + enum irdma_cm_event_type type, void *caller), + TP_ARGS(cm_node, type, caller)); + +DEFINE_EVENT(cm_node_template, irdma_reject, + TP_PROTO(struct irdma_cm_node *cm_node, + enum irdma_cm_event_type type, void *caller), + TP_ARGS(cm_node, type, caller)); + +DEFINE_EVENT(cm_node_template, irdma_find_node, + TP_PROTO(struct irdma_cm_node *cm_node, + enum irdma_cm_event_type type, void *caller), + TP_ARGS(cm_node, type, caller)); + +DEFINE_EVENT(cm_node_template, irdma_send_reset, + TP_PROTO(struct irdma_cm_node *cm_node, + enum irdma_cm_event_type type, void *caller), + TP_ARGS(cm_node, type, caller)); + +DEFINE_EVENT(cm_node_template, irdma_rem_ref_cm_node, + TP_PROTO(struct irdma_cm_node *cm_node, + enum irdma_cm_event_type type, void *caller), + TP_ARGS(cm_node, type, caller)); + +DEFINE_EVENT(cm_node_template, irdma_cm_event_handler, + TP_PROTO(struct irdma_cm_node *cm_node, + enum irdma_cm_event_type type, void *caller), + TP_ARGS(cm_node, type, caller)); + +TRACE_EVENT(open_err_template, + TP_PROTO(struct irdma_cm_node *cm_node, bool reset, void *caller), + TP_ARGS(cm_node, reset, caller), + TP_STRUCT__entry(__field(struct irdma_device *, iwdev) + __field(struct irdma_cm_node *, cm_node) + __field(enum irdma_cm_node_state, state) + __field(bool, reset) + __field(void *, caller) + ), + TP_fast_assign(__entry->iwdev = cm_node->iwdev; + __entry->cm_node = cm_node; + __entry->state = cm_node->state; + __entry->reset = reset; + __entry->caller = caller; + ), + TP_printk("iwdev=%p caller=%pS node%p reset=%d state=%s", + __entry->iwdev, + __entry->caller, + __entry->cm_node, + __entry->reset, + parse_cm_state(__entry->state) + ) +); + +DEFINE_EVENT(open_err_template, irdma_active_open_err, + TP_PROTO(struct irdma_cm_node *cm_node, bool reset, void *caller), + TP_ARGS(cm_node, reset, caller)); + +DEFINE_EVENT(open_err_template, irdma_passive_open_err, + TP_PROTO(struct irdma_cm_node *cm_node, bool reset, void *caller), + TP_ARGS(cm_node, reset, caller)); + +DECLARE_EVENT_CLASS(cm_node_ah_template, + TP_PROTO(struct irdma_cm_node *cm_node), + TP_ARGS(cm_node), + TP_STRUCT__entry(__field(struct irdma_device *, iwdev) + __field(struct irdma_cm_node *, cm_node) + __field(struct irdma_sc_ah *, ah) + __field(u32, refcount) + __field(u16, lport) + __field(u16, rport) + __field(enum irdma_cm_node_state, state) + __field(bool, ipv4) + __field(u16, vlan_id) + __field(int, accel) + __dynamic_array(u32, laddr, 4) + __dynamic_array(u32, raddr, 4) + ), + 
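For reviewers unfamiliar with the TRACE_EVENT/DEFINE_EVENT pattern: each DEFINE_EVENT above expands into a trace_<name>() call the CM code can invoke, and the whole group can typically be toggled at runtime under /sys/kernel/debug/tracing/events/irdma_cm/. A hypothetical call site, illustrative and not taken from this patch:

/* Emit the irdma_accept tracepoint defined from cm_node_template above. */
static void trace_accept_example(struct irdma_cm_node *cm_node)
{
	trace_irdma_accept(cm_node, IRDMA_CM_EVENT_MPA_ACCEPT,
			   __builtin_return_address(0));
}
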
TP_fast_assign(__entry->iwdev = cm_node->iwdev; + __entry->cm_node = cm_node; + __entry->ah = cm_node->ah; + __entry->refcount = refcount_read(&cm_node->refcnt); + __entry->lport = cm_node->loc_port; + __entry->rport = cm_node->rem_port; + __entry->state = cm_node->state; + __entry->ipv4 = cm_node->ipv4; + __entry->vlan_id = cm_node->vlan_id; + __entry->accel = cm_node->accelerated; + memcpy(__get_dynamic_array(laddr), + cm_node->loc_addr, 4); + memcpy(__get_dynamic_array(raddr), + cm_node->rem_addr, 4); + ), + TP_printk("iwdev=%p node=%p ah=%p refcnt=%d vlan_id=%d accel=%d state=%s loc: %s rem: %s", + __entry->iwdev, + __entry->cm_node, + __entry->ah, + __entry->refcount, + __entry->vlan_id, + __entry->accel, + parse_cm_state(__entry->state), + __print_ip_addr(__get_dynamic_array(laddr), + __entry->lport, __entry->ipv4), + __print_ip_addr(__get_dynamic_array(raddr), + __entry->rport, __entry->ipv4) + ) +); + +DEFINE_EVENT(cm_node_ah_template, irdma_cm_free_ah, + TP_PROTO(struct irdma_cm_node *cm_node), + TP_ARGS(cm_node)); + +DEFINE_EVENT(cm_node_ah_template, irdma_create_ah, + TP_PROTO(struct irdma_cm_node *cm_node), + TP_ARGS(cm_node)); + +#endif /* __TRACE_CM_H */ + +#undef TRACE_INCLUDE_PATH +#undef TRACE_INCLUDE_FILE +#define TRACE_INCLUDE_PATH . +#define TRACE_INCLUDE_FILE trace_cm +#include From patchwork Fri Apr 17 17:12:49 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeff Kirsher X-Patchwork-Id: 221089 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.8 required=3.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id F0BF4C3A5A0 for ; Fri, 17 Apr 2020 17:13:41 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id D51A120776 for ; Fri, 17 Apr 2020 17:13:41 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729632AbgDQRNl (ORCPT ); Fri, 17 Apr 2020 13:13:41 -0400 Received: from mga01.intel.com ([192.55.52.88]:30119 "EHLO mga01.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728458AbgDQRNg (ORCPT ); Fri, 17 Apr 2020 13:13:36 -0400 IronPort-SDR: NB8Ttq3o3WEn/z4Nwxv7EBAQC27wTNKH32yT/tRl+mdBnbNEonqB09LLRYZfYHFuoXBHNQdiAQ 6LLhRr1CaJtQ== X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 17 Apr 2020 10:12:58 -0700 IronPort-SDR: jm+mQWCeYgDIvC7yKe8iQJKSBcZMK8yHBD1V1hfMbUeHY0RJ8TY/xhH/R0ADbfd/XRTZ4ijy0y P2A0TH5iKzKg== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.72,395,1580803200"; d="scan'208";a="364383748" Received: from jtkirshe-desk1.jf.intel.com ([134.134.177.86]) by fmsmga001.fm.intel.com with ESMTP; 17 Apr 2020 10:12:58 -0700 From: Jeff Kirsher To: gregkh@linuxfoundation.org, jgg@ziepe.ca Cc: Mustafa Ismail , netdev@vger.kernel.org, linux-rdma@vger.kernel.org, nhorman@redhat.com, sassmann@redhat.com, Shiraz Saleem Subject: [RFC PATCH v5 14/16] RDMA/irdma: Add ABI definitions Date: Fri, 17 Apr 2020 10:12:49 -0700 Message-Id: <20200417171251.1533371-15-jeffrey.t.kirsher@intel.com> X-Mailer: 
git-send-email 2.25.2 In-Reply-To: <20200417171251.1533371-1-jeffrey.t.kirsher@intel.com> References: <20200417171251.1533371-1-jeffrey.t.kirsher@intel.com> MIME-Version: 1.0 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Mustafa Ismail Add ABI definitions for irdma. Signed-off-by: Mustafa Ismail Signed-off-by: Shiraz Saleem --- include/uapi/rdma/irdma-abi.h | 140 ++++++++++++++++++++++++++++++++++ 1 file changed, 140 insertions(+) create mode 100644 include/uapi/rdma/irdma-abi.h diff --git a/include/uapi/rdma/irdma-abi.h b/include/uapi/rdma/irdma-abi.h new file mode 100644 index 000000000000..2eb253220161 --- /dev/null +++ b/include/uapi/rdma/irdma-abi.h @@ -0,0 +1,140 @@ +/* SPDX-License-Identifier: (GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */ +/* + * Copyright (c) 2006 - 2019 Intel Corporation. All rights reserved. + * Copyright (c) 2005 Topspin Communications. All rights reserved. + * Copyright (c) 2005 Cisco Systems. All rights reserved. + * Copyright (c) 2005 Open Grid Computing, Inc. All rights reserved. + */ + +#ifndef IRDMA_ABI_H +#define IRDMA_ABI_H + +#include + +/* irdma must support legacy GEN_1 i40iw kernel + * and user-space whose last ABI ver is 5 + */ +#define IRDMA_ABI_VER 6 + +enum irdma_memreg_type { + IW_MEMREG_TYPE_MEM = 0, + IW_MEMREG_TYPE_QP = 1, + IW_MEMREG_TYPE_CQ = 2, + IW_MEMREG_TYPE_RSVD = 3, + IW_MEMREG_TYPE_MW = 4, +}; + +struct irdma_alloc_ucontext_req { + __u32 rsvd32; + __u8 userspace_ver; + __u8 rsvd8[3]; +}; + +struct i40iw_alloc_ucontext_req { + __u32 rsvd32; + __u8 userspace_ver; + __u8 rsvd8[3]; +}; + +struct irdma_alloc_ucontext_resp { + __aligned_u64 feature_flags; + __aligned_u64 db_mmap_key; + __u32 max_hw_wq_frags; + __u32 max_hw_read_sges; + __u32 max_hw_inline; + __u32 max_hw_rq_quanta; + __u32 max_hw_wq_quanta; + __u32 min_hw_cq_size; + __u32 max_hw_cq_size; + __u32 rsvd1[7]; + __u16 max_hw_sq_chunk; + __u16 rsvd2[11]; + __u8 kernel_ver; + __u8 hw_rev; + __u8 rsvd3[6]; +}; + +struct i40iw_alloc_ucontext_resp { + __u32 max_pds; + __u32 max_qps; + __u32 wq_size; /* size of the WQs (SQ+RQ) in the mmaped area */ + __u8 kernel_ver; + __u8 rsvd[3]; +}; + +struct irdma_alloc_pd_resp { + __u32 pd_id; + __u8 rsvd[4]; +}; + +struct irdma_resize_cq_req { + __aligned_u64 user_cq_buffer; +}; + +struct irdma_create_cq_req { + __aligned_u64 user_cq_buf; + __aligned_u64 user_shadow_area; +}; + +struct irdma_create_qp_req { + __aligned_u64 user_wqe_bufs; + __aligned_u64 user_compl_ctx; +}; + +struct i40iw_create_qp_req { + __aligned_u64 user_wqe_bufs; + __aligned_u64 user_compl_ctx; +}; + +struct irdma_mem_reg_req { + __u16 reg_type; /* Memory, QP or CQ */ + __u16 cq_pages; + __u16 rq_pages; + __u16 sq_pages; +}; + +struct irdma_modify_qp_req { + __u8 sq_flush; + __u8 rq_flush; + __u8 rsvd[6]; +}; + +struct irdma_create_cq_resp { + __u32 cq_id; + __u32 cq_size; +}; + +struct irdma_create_qp_resp { + __u32 qp_id; + __u32 actual_sq_size; + __u32 actual_rq_size; + __u32 irdma_drv_opt; + __u32 qp_caps; + __u16 rsvd1; + __u8 lsmm; + __u8 rsvd2; +}; + +struct i40iw_create_qp_resp { + __u32 qp_id; + __u32 actual_sq_size; + __u32 actual_rq_size; + __u32 i40iw_drv_opt; + __u16 push_idx; + __u8 lsmm; + __u8 rsvd; +}; + +struct irdma_modify_qp_resp { + __aligned_u64 push_wqe_mmap_key; + __aligned_u64 push_db_mmap_key; + __u16 push_offset; + __u8 push_valid; + __u8 rsvd[5]; +}; + +struct irdma_create_ah_resp { + __u32 ah_id; + __u8 rsvd[4]; +}; +#endif /* IRDMA_ABI_H */
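Closing illustration for the ABI header above (a hypothetical user-space snippet, not part of the patch): a provider library is expected to advertise its own version in irdma_alloc_ucontext_req and then size its queues from the limits the kernel returns in irdma_alloc_ucontext_resp; the abi_ver checks in the shared queue code (e.g. the GEN_1 RQ shift for abi_ver > 4) key off the same value. The include path assumes the usual uapi header install location, and the uverbs command exchange (ibv_cmd_get_context and friends) is omitted.

/* Illustrative only: filling/consuming the ABI structures from user space. */
#include <stdio.h>
#include <string.h>
#include <rdma/irdma-abi.h>	/* the header added by this patch */

int main(void)
{
	struct irdma_alloc_ucontext_req req;
	struct irdma_alloc_ucontext_resp resp;

	memset(&req, 0, sizeof(req));
	memset(&resp, 0, sizeof(resp));
	req.userspace_ver = IRDMA_ABI_VER;	/* 6 for this provider */

	/* ... exchange req/resp with the kernel via the uverbs command ... */

	printf("hw_rev=%d max_hw_wq_quanta=%u max_hw_inline=%u\n",
	       resp.hw_rev, resp.max_hw_wq_quanta, resp.max_hw_inline);
	return 0;
}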