From patchwork Mon Oct 12 20:38:17 2020
X-Patchwork-Submitter: Giovanni Cabiddu
X-Patchwork-Id: 269568
From: Giovanni Cabiddu
To: herbert@gondor.apana.org.au
Cc: linux-crypto@vger.kernel.org, qat-linux@intel.com, Marco Chiappero, Mateusz Polrola, Giovanni Cabiddu, Andy Shevchenko, Indrasena Reddy Gali
Subject: [PATCH 01/31] crypto: qat - update IV in software
Date: Mon, 12 Oct 2020 21:38:17 +0100
Message-Id: <20201012203847.340030-2-giovanni.cabiddu@intel.com>
In-Reply-To: <20201012203847.340030-1-giovanni.cabiddu@intel.com>

From: Marco Chiappero

Perform the IV update calculations in software for AES-CBC and AES-CTR. This allows the IV to be embedded in the request descriptor and removes the allocation of the IV buffer from the data path. In addition, this change allows support for QAT devices that are not capable of updating the IV buffer when performing an AES-CBC or AES-CTR operation.
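The new IV handling is easier to follow spelled out: for AES-CTR the 16-byte IV is treated as a big-endian 128-bit block counter and advanced by the number of blocks covered by the request (qat_alg_update_iv_ctr_mode), while for AES-CBC the next IV is simply the last ciphertext block, taken from the destination after an encryption or from the source before a decryption (qat_alg_update_iv_cbc_mode). The standalone C sketch below illustrates only the CTR counter arithmetic; load_be64(), store_be64() and the test in main() are illustration-only helpers, whereas the driver operates on the iv_hi/iv_lo union members with be64_to_cpu()/cpu_to_be64() and DIV_ROUND_UP().

/*
 * Sketch of the software AES-CTR IV update: advance a big-endian
 * 128-bit counter by the number of AES blocks processed, with carry
 * from the low into the high 64 bits.
 */
#include <stdint.h>
#include <stdio.h>

#define AES_BLOCK_SIZE 16

static uint64_t load_be64(const uint8_t *p)
{
	uint64_t v = 0;

	for (int i = 0; i < 8; i++)
		v = (v << 8) | p[i];
	return v;
}

static void store_be64(uint8_t *p, uint64_t v)
{
	for (int i = 7; i >= 0; i--) {
		p[i] = v & 0xff;
		v >>= 8;
	}
}

static void ctr_update_iv(uint8_t iv[AES_BLOCK_SIZE], size_t cryptlen)
{
	uint64_t hi = load_be64(iv);		/* maps to iv_hi */
	uint64_t lo = load_be64(iv + 8);	/* maps to iv_lo */
	uint64_t blocks = (cryptlen + AES_BLOCK_SIZE - 1) / AES_BLOCK_SIZE;
	uint64_t lo_prev = lo;

	lo += blocks;
	if (lo < lo_prev)	/* low word wrapped: carry into high word */
		hi++;

	store_be64(iv, hi);
	store_be64(iv + 8, lo);
}

int main(void)
{
	/* Counter whose low 64 bits are about to wrap. */
	uint8_t iv[AES_BLOCK_SIZE] = {
		0, 0, 0, 0, 0, 0, 0, 0,
		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
	};

	ctr_update_iv(iv, 2 * AES_BLOCK_SIZE);

	for (int i = 0; i < AES_BLOCK_SIZE; i++)
		printf("%02x", iv[i]);
	printf("\n");	/* prints 00000000000000010000000000000001 */
	return 0;
}

The carry handling is the reason both halves are tracked: a large enough request can overflow the low 64 bits of the counter, in which case the high half must be incremented as well.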
Signed-off-by: Marco Chiappero Co-developed-by: Mateusz Polrola Signed-off-by: Mateusz Polrola Co-developed-by: Giovanni Cabiddu Signed-off-by: Giovanni Cabiddu Reviewed-by: Andy Shevchenko Tested-by: Indrasena Reddy Gali --- drivers/crypto/qat/qat_common/qat_algs.c | 136 ++++++++++++--------- drivers/crypto/qat/qat_common/qat_crypto.h | 11 +- 2 files changed, 89 insertions(+), 58 deletions(-) diff --git a/drivers/crypto/qat/qat_common/qat_algs.c b/drivers/crypto/qat/qat_common/qat_algs.c index d552dbcfe0a0..a38afc61f6d2 100644 --- a/drivers/crypto/qat/qat_common/qat_algs.c +++ b/drivers/crypto/qat/qat_common/qat_algs.c @@ -11,6 +11,7 @@ #include #include #include +#include #include #include #include "adf_accel_devices.h" @@ -90,6 +91,7 @@ struct qat_alg_skcipher_ctx { struct qat_crypto_instance *inst; struct crypto_skcipher *ftfm; bool fallback; + int mode; }; static int qat_get_inter_state_size(enum icp_qat_hw_auth_algo qat_hash_alg) @@ -214,24 +216,7 @@ static int qat_alg_do_precomputes(struct icp_qat_hw_auth_algo_blk *hash, return 0; } -static void qat_alg_init_hdr_iv_updt(struct icp_qat_fw_comn_req_hdr *header) -{ - ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_SET(header->serv_specif_flags, - ICP_QAT_FW_CIPH_IV_64BIT_PTR); - ICP_QAT_FW_LA_UPDATE_STATE_SET(header->serv_specif_flags, - ICP_QAT_FW_LA_UPDATE_STATE); -} - -static void qat_alg_init_hdr_no_iv_updt(struct icp_qat_fw_comn_req_hdr *header) -{ - ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_SET(header->serv_specif_flags, - ICP_QAT_FW_CIPH_IV_16BYTE_DATA); - ICP_QAT_FW_LA_UPDATE_STATE_SET(header->serv_specif_flags, - ICP_QAT_FW_LA_NO_UPDATE_STATE); -} - -static void qat_alg_init_common_hdr(struct icp_qat_fw_comn_req_hdr *header, - int aead) +static void qat_alg_init_common_hdr(struct icp_qat_fw_comn_req_hdr *header) { header->hdr_flags = ICP_QAT_FW_COMN_HDR_FLAGS_BUILD(ICP_QAT_FW_COMN_REQ_FLAG_SET); @@ -241,12 +226,12 @@ static void qat_alg_init_common_hdr(struct icp_qat_fw_comn_req_hdr *header, QAT_COMN_PTR_TYPE_SGL); ICP_QAT_FW_LA_PARTIAL_SET(header->serv_specif_flags, ICP_QAT_FW_LA_PARTIAL_NONE); - if (aead) - qat_alg_init_hdr_no_iv_updt(header); - else - qat_alg_init_hdr_iv_updt(header); + ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_SET(header->serv_specif_flags, + ICP_QAT_FW_CIPH_IV_16BYTE_DATA); ICP_QAT_FW_LA_PROTO_SET(header->serv_specif_flags, ICP_QAT_FW_LA_NO_PROTO); + ICP_QAT_FW_LA_UPDATE_STATE_SET(header->serv_specif_flags, + ICP_QAT_FW_LA_NO_UPDATE_STATE); } static int qat_alg_aead_init_enc_session(struct crypto_aead *aead_tfm, @@ -281,7 +266,7 @@ static int qat_alg_aead_init_enc_session(struct crypto_aead *aead_tfm, return -EFAULT; /* Request setup */ - qat_alg_init_common_hdr(header, 1); + qat_alg_init_common_hdr(header); header->service_cmd_id = ICP_QAT_FW_LA_CMD_CIPHER_HASH; ICP_QAT_FW_LA_DIGEST_IN_BUFFER_SET(header->serv_specif_flags, ICP_QAT_FW_LA_DIGEST_IN_BUFFER); @@ -368,7 +353,7 @@ static int qat_alg_aead_init_dec_session(struct crypto_aead *aead_tfm, return -EFAULT; /* Request setup */ - qat_alg_init_common_hdr(header, 1); + qat_alg_init_common_hdr(header); header->service_cmd_id = ICP_QAT_FW_LA_CMD_HASH_CIPHER; ICP_QAT_FW_LA_DIGEST_IN_BUFFER_SET(header->serv_specif_flags, ICP_QAT_FW_LA_DIGEST_IN_BUFFER); @@ -432,7 +417,7 @@ static void qat_alg_skcipher_init_com(struct qat_alg_skcipher_ctx *ctx, struct icp_qat_fw_cipher_cd_ctrl_hdr *cd_ctrl = (void *)&req->cd_ctrl; memcpy(cd->aes.key, key, keylen); - qat_alg_init_common_hdr(header, 0); + qat_alg_init_common_hdr(header); header->service_cmd_id = ICP_QAT_FW_LA_CMD_CIPHER; 
cd_pars->u.s.content_desc_params_sz = sizeof(struct icp_qat_hw_cipher_algo_blk) >> 3; @@ -787,6 +772,61 @@ static void qat_aead_alg_callback(struct icp_qat_fw_la_resp *qat_resp, areq->base.complete(&areq->base, res); } +static void qat_alg_update_iv_ctr_mode(struct qat_crypto_request *qat_req) +{ + struct skcipher_request *sreq = qat_req->skcipher_req; + u64 iv_lo_prev; + u64 iv_lo; + u64 iv_hi; + + memcpy(qat_req->iv, sreq->iv, AES_BLOCK_SIZE); + + iv_lo = be64_to_cpu(qat_req->iv_lo); + iv_hi = be64_to_cpu(qat_req->iv_hi); + + iv_lo_prev = iv_lo; + iv_lo += DIV_ROUND_UP(sreq->cryptlen, AES_BLOCK_SIZE); + if (iv_lo < iv_lo_prev) + iv_hi++; + + qat_req->iv_lo = cpu_to_be64(iv_lo); + qat_req->iv_hi = cpu_to_be64(iv_hi); +} + +static void qat_alg_update_iv_cbc_mode(struct qat_crypto_request *qat_req) +{ + struct skcipher_request *sreq = qat_req->skcipher_req; + int offset = sreq->cryptlen - AES_BLOCK_SIZE; + struct scatterlist *sgl; + + if (qat_req->encryption) + sgl = sreq->dst; + else + sgl = sreq->src; + + scatterwalk_map_and_copy(qat_req->iv, sgl, offset, AES_BLOCK_SIZE, 0); +} + +static void qat_alg_update_iv(struct qat_crypto_request *qat_req) +{ + struct qat_alg_skcipher_ctx *ctx = qat_req->skcipher_ctx; + struct device *dev = &GET_DEV(ctx->inst->accel_dev); + + switch (ctx->mode) { + case ICP_QAT_HW_CIPHER_CTR_MODE: + qat_alg_update_iv_ctr_mode(qat_req); + break; + case ICP_QAT_HW_CIPHER_CBC_MODE: + qat_alg_update_iv_cbc_mode(qat_req); + break; + case ICP_QAT_HW_CIPHER_XTS_MODE: + break; + default: + dev_warn(dev, "Unsupported IV update for cipher mode %d\n", + ctx->mode); + } +} + static void qat_skcipher_alg_callback(struct icp_qat_fw_la_resp *qat_resp, struct qat_crypto_request *qat_req) { @@ -794,16 +834,16 @@ static void qat_skcipher_alg_callback(struct icp_qat_fw_la_resp *qat_resp, struct qat_crypto_instance *inst = ctx->inst; struct skcipher_request *sreq = qat_req->skcipher_req; u8 stat_filed = qat_resp->comn_resp.comn_status; - struct device *dev = &GET_DEV(ctx->inst->accel_dev); int res = 0, qat_res = ICP_QAT_FW_COMN_RESP_CRYPTO_STAT_GET(stat_filed); qat_alg_free_bufl(inst, qat_req); if (unlikely(qat_res != ICP_QAT_FW_COMN_STATUS_FLAG_OK)) res = -EINVAL; + if (qat_req->encryption) + qat_alg_update_iv(qat_req); + memcpy(sreq->iv, qat_req->iv, AES_BLOCK_SIZE); - dma_free_coherent(dev, AES_BLOCK_SIZE, qat_req->iv, - qat_req->iv_paddr); sreq->base.complete(&sreq->base, res); } @@ -981,6 +1021,8 @@ static int qat_alg_skcipher_setkey(struct crypto_skcipher *tfm, { struct qat_alg_skcipher_ctx *ctx = crypto_skcipher_ctx(tfm); + ctx->mode = mode; + if (ctx->enc_cd) return qat_alg_skcipher_rekey(ctx, key, keylen, mode); else @@ -1035,23 +1077,14 @@ static int qat_alg_skcipher_encrypt(struct skcipher_request *req) struct qat_crypto_request *qat_req = skcipher_request_ctx(req); struct icp_qat_fw_la_cipher_req_params *cipher_param; struct icp_qat_fw_la_bulk_req *msg; - struct device *dev = &GET_DEV(ctx->inst->accel_dev); int ret, ctr = 0; if (req->cryptlen == 0) return 0; - qat_req->iv = dma_alloc_coherent(dev, AES_BLOCK_SIZE, - &qat_req->iv_paddr, GFP_ATOMIC); - if (!qat_req->iv) - return -ENOMEM; - ret = qat_alg_sgl_to_bufl(ctx->inst, req->src, req->dst, qat_req); - if (unlikely(ret)) { - dma_free_coherent(dev, AES_BLOCK_SIZE, qat_req->iv, - qat_req->iv_paddr); + if (unlikely(ret)) return ret; - } msg = &qat_req->req; *msg = ctx->enc_fw_req; @@ -1061,19 +1094,18 @@ static int qat_alg_skcipher_encrypt(struct skcipher_request *req) qat_req->req.comn_mid.opaque_data = (u64)(__force 
long)qat_req; qat_req->req.comn_mid.src_data_addr = qat_req->buf.blp; qat_req->req.comn_mid.dest_data_addr = qat_req->buf.bloutp; + qat_req->encryption = true; cipher_param = (void *)&qat_req->req.serv_specif_rqpars; cipher_param->cipher_length = req->cryptlen; cipher_param->cipher_offset = 0; - cipher_param->u.s.cipher_IV_ptr = qat_req->iv_paddr; - memcpy(qat_req->iv, req->iv, AES_BLOCK_SIZE); + memcpy(cipher_param->u.cipher_IV_array, req->iv, AES_BLOCK_SIZE); + do { ret = adf_send_message(ctx->inst->sym_tx, (u32 *)msg); } while (ret == -EAGAIN && ctr++ < 10); if (ret == -EAGAIN) { qat_alg_free_bufl(ctx->inst, qat_req); - dma_free_coherent(dev, AES_BLOCK_SIZE, qat_req->iv, - qat_req->iv_paddr); return -EBUSY; } return -EINPROGRESS; @@ -1113,23 +1145,14 @@ static int qat_alg_skcipher_decrypt(struct skcipher_request *req) struct qat_crypto_request *qat_req = skcipher_request_ctx(req); struct icp_qat_fw_la_cipher_req_params *cipher_param; struct icp_qat_fw_la_bulk_req *msg; - struct device *dev = &GET_DEV(ctx->inst->accel_dev); int ret, ctr = 0; if (req->cryptlen == 0) return 0; - qat_req->iv = dma_alloc_coherent(dev, AES_BLOCK_SIZE, - &qat_req->iv_paddr, GFP_ATOMIC); - if (!qat_req->iv) - return -ENOMEM; - ret = qat_alg_sgl_to_bufl(ctx->inst, req->src, req->dst, qat_req); - if (unlikely(ret)) { - dma_free_coherent(dev, AES_BLOCK_SIZE, qat_req->iv, - qat_req->iv_paddr); + if (unlikely(ret)) return ret; - } msg = &qat_req->req; *msg = ctx->dec_fw_req; @@ -1139,19 +1162,20 @@ static int qat_alg_skcipher_decrypt(struct skcipher_request *req) qat_req->req.comn_mid.opaque_data = (u64)(__force long)qat_req; qat_req->req.comn_mid.src_data_addr = qat_req->buf.blp; qat_req->req.comn_mid.dest_data_addr = qat_req->buf.bloutp; + qat_req->encryption = false; cipher_param = (void *)&qat_req->req.serv_specif_rqpars; cipher_param->cipher_length = req->cryptlen; cipher_param->cipher_offset = 0; - cipher_param->u.s.cipher_IV_ptr = qat_req->iv_paddr; - memcpy(qat_req->iv, req->iv, AES_BLOCK_SIZE); + memcpy(cipher_param->u.cipher_IV_array, req->iv, AES_BLOCK_SIZE); + + qat_alg_update_iv(qat_req); + do { ret = adf_send_message(ctx->inst->sym_tx, (u32 *)msg); } while (ret == -EAGAIN && ctr++ < 10); if (ret == -EAGAIN) { qat_alg_free_bufl(ctx->inst, qat_req); - dma_free_coherent(dev, AES_BLOCK_SIZE, qat_req->iv, - qat_req->iv_paddr); return -EBUSY; } return -EINPROGRESS; diff --git a/drivers/crypto/qat/qat_common/qat_crypto.h b/drivers/crypto/qat/qat_common/qat_crypto.h index 12682d1e9f5f..8d11e94cbf08 100644 --- a/drivers/crypto/qat/qat_common/qat_crypto.h +++ b/drivers/crypto/qat/qat_common/qat_crypto.h @@ -3,6 +3,7 @@ #ifndef _QAT_CRYPTO_INSTANCE_H_ #define _QAT_CRYPTO_INSTANCE_H_ +#include #include #include #include "adf_accel_devices.h" @@ -44,8 +45,14 @@ struct qat_crypto_request { struct qat_crypto_request_buffs buf; void (*cb)(struct icp_qat_fw_la_resp *resp, struct qat_crypto_request *req); - void *iv; - dma_addr_t iv_paddr; + union { + struct { + __be64 iv_hi; + __be64 iv_lo; + }; + u8 iv[AES_BLOCK_SIZE]; + }; + bool encryption; }; #endif From patchwork Mon Oct 12 20:38:18 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Giovanni Cabiddu X-Patchwork-Id: 285449 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, 
From: Giovanni Cabiddu
To: herbert@gondor.apana.org.au
Cc: linux-crypto@vger.kernel.org, qat-linux@intel.com, Giovanni Cabiddu, Fiona Trahe, Wojciech Ziemba, Andy Shevchenko
Subject: [PATCH 02/31] crypto: qat - mask device capabilities with soft straps
Date: Mon, 12 Oct 2020 21:38:18 +0100
Message-Id: <20201012203847.340030-3-giovanni.cabiddu@intel.com>
In-Reply-To: <20201012203847.340030-1-giovanni.cabiddu@intel.com>

Enable acceleration engines (AEs) and accelerators based on both soft straps and fuses. When looping over AEs or accelerators, skip the ones that are disabled. This patch is based on earlier work done by Conor McLoughlin.
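To make the masking scheme concrete, here is a standalone sketch (not driver code) of the idea behind get_accel_mask() and get_ae_mask() for c3xxx/c62x in the diff below: a bit set in either the fuse register or the soft-strap register disables the corresponding unit, the accelerator bits sit above a register offset, and every disabled accelerator also disables its pair of AEs. The offsets, masks and register layout used here are made-up placeholders rather than the real device constants.

/*
 * Sketch of deriving accelerator and AE masks from fuses ORed with
 * soft straps. Constants are placeholders for illustration only.
 */
#include <stdint.h>
#include <stdio.h>

#define ACCELERATORS_REG_OFFSET	16	/* placeholder */
#define ACCELERATORS_MASK	0x7u	/* placeholder: 3 accelerators */
#define ACCELENGINES_MASK	0x3fu	/* placeholder: 6 AEs */
#define MAX_ACCELERATORS	3	/* placeholder */

struct hw_data {
	uint32_t fuses;
	uint32_t straps;
};

static uint32_t get_accel_mask(const struct hw_data *self)
{
	/* A bit set in either register disables that accelerator. */
	uint32_t accel = ~(self->fuses | self->straps) >> ACCELERATORS_REG_OFFSET;

	return accel & ACCELERATORS_MASK;
}

static uint32_t get_ae_mask(const struct hw_data *self)
{
	uint32_t disabled = ~get_accel_mask(self) & ACCELERATORS_MASK;
	uint32_t straps = self->straps;

	/* A disabled accelerator takes its two AEs down with it. */
	for (int accel = 0; accel < MAX_ACCELERATORS; accel++)
		if (disabled & (1u << accel))
			straps |= 0x3u << (accel << 1);

	return ~(self->fuses | straps) & ACCELENGINES_MASK;
}

int main(void)
{
	/* Soft straps disable accelerator 1; the fuses disable nothing. */
	struct hw_data hw = {
		.fuses = 0,
		.straps = 1u << (ACCELERATORS_REG_OFFSET + 1),
	};

	printf("accel mask: 0x%x\n", (unsigned int)get_accel_mask(&hw)); /* 0x5 */
	printf("ae mask:    0x%x\n", (unsigned int)get_ae_mask(&hw));    /* 0x33 */
	return 0;
}

The behavioural change carried by the patch is the "fuses | straps" term: previously only the fuse register was consulted when building the masks.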
Signed-off-by: Giovanni Cabiddu Reviewed-by: Fiona Trahe Reviewed-by: Wojciech Ziemba Reviewed-by: Andy Shevchenko --- .../crypto/qat/qat_c3xxx/adf_c3xxx_hw_data.c | 34 +++++++++++++++---- .../crypto/qat/qat_c3xxx/adf_c3xxx_hw_data.h | 1 + drivers/crypto/qat/qat_c3xxx/adf_drv.c | 6 ++-- .../qat/qat_c3xxxvf/adf_c3xxxvf_hw_data.c | 4 +-- drivers/crypto/qat/qat_c3xxxvf/adf_drv.c | 4 +-- .../crypto/qat/qat_c62x/adf_c62x_hw_data.c | 34 +++++++++++++++---- .../crypto/qat/qat_c62x/adf_c62x_hw_data.h | 1 + drivers/crypto/qat/qat_c62x/adf_drv.c | 6 ++-- .../qat/qat_c62xvf/adf_c62xvf_hw_data.c | 4 +-- drivers/crypto/qat/qat_c62xvf/adf_drv.c | 4 +-- .../crypto/qat/qat_common/adf_accel_devices.h | 5 +-- drivers/crypto/qat/qat_common/qat_hal.c | 27 ++++++++------- .../qat/qat_dh895xcc/adf_dh895xcc_hw_data.c | 20 +++++++---- drivers/crypto/qat/qat_dh895xcc/adf_drv.c | 4 +-- .../qat_dh895xccvf/adf_dh895xccvf_hw_data.c | 4 +-- drivers/crypto/qat/qat_dh895xccvf/adf_drv.c | 4 +-- 16 files changed, 109 insertions(+), 53 deletions(-) diff --git a/drivers/crypto/qat/qat_c3xxx/adf_c3xxx_hw_data.c b/drivers/crypto/qat/qat_c3xxx/adf_c3xxx_hw_data.c index aee494d3da52..4b2f5aa83391 100644 --- a/drivers/crypto/qat/qat_c3xxx/adf_c3xxx_hw_data.c +++ b/drivers/crypto/qat/qat_c3xxx/adf_c3xxx_hw_data.c @@ -17,15 +17,33 @@ static struct adf_hw_device_class c3xxx_class = { .instances = 0 }; -static u32 get_accel_mask(u32 fuse) +static u32 get_accel_mask(struct adf_hw_device_data *self) { - return (~fuse) >> ADF_C3XXX_ACCELERATORS_REG_OFFSET & - ADF_C3XXX_ACCELERATORS_MASK; + u32 straps = self->straps; + u32 fuses = self->fuses; + u32 accel; + + accel = ~(fuses | straps) >> ADF_C3XXX_ACCELERATORS_REG_OFFSET; + accel &= ADF_C3XXX_ACCELERATORS_MASK; + + return accel; } -static u32 get_ae_mask(u32 fuse) +static u32 get_ae_mask(struct adf_hw_device_data *self) { - return (~fuse) & ADF_C3XXX_ACCELENGINES_MASK; + u32 straps = self->straps; + u32 fuses = self->fuses; + unsigned long disabled; + u32 ae_disable; + int accel; + + /* If an accel is disabled, then disable the corresponding two AEs */ + disabled = ~get_accel_mask(self) & ADF_C3XXX_ACCELERATORS_MASK; + ae_disable = BIT(1) | BIT(0); + for_each_set_bit(accel, &disabled, ADF_C3XXX_MAX_ACCELERATORS) + straps |= ae_disable << (accel << 1); + + return ~(fuses | straps) & ADF_C3XXX_ACCELENGINES_MASK; } static u32 get_num_accels(struct adf_hw_device_data *self) @@ -109,11 +127,13 @@ static void adf_enable_error_correction(struct adf_accel_dev *accel_dev) { struct adf_hw_device_data *hw_device = accel_dev->hw_device; struct adf_bar *misc_bar = &GET_BARS(accel_dev)[ADF_C3XXX_PMISC_BAR]; + unsigned long accel_mask = hw_device->accel_mask; + unsigned long ae_mask = hw_device->ae_mask; void __iomem *csr = misc_bar->virt_addr; unsigned int val, i; /* Enable Accel Engine error detection & correction */ - for (i = 0; i < hw_device->get_num_aes(hw_device); i++) { + for_each_set_bit(i, &ae_mask, GET_MAX_ACCELENGINES(accel_dev)) { val = ADF_CSR_RD(csr, ADF_C3XXX_AE_CTX_ENABLES(i)); val |= ADF_C3XXX_ENABLE_AE_ECC_ERR; ADF_CSR_WR(csr, ADF_C3XXX_AE_CTX_ENABLES(i), val); @@ -123,7 +143,7 @@ static void adf_enable_error_correction(struct adf_accel_dev *accel_dev) } /* Enable shared memory error detection & correction */ - for (i = 0; i < hw_device->get_num_accels(hw_device); i++) { + for_each_set_bit(i, &accel_mask, ADF_C3XXX_MAX_ACCELERATORS) { val = ADF_CSR_RD(csr, ADF_C3XXX_UERRSSMSH(i)); val |= ADF_C3XXX_ERRSSMSH_EN; ADF_CSR_WR(csr, ADF_C3XXX_UERRSSMSH(i), val); diff --git 
a/drivers/crypto/qat/qat_c3xxx/adf_c3xxx_hw_data.h b/drivers/crypto/qat/qat_c3xxx/adf_c3xxx_hw_data.h index 8b5dd2c94ebf..94097816f68a 100644 --- a/drivers/crypto/qat/qat_c3xxx/adf_c3xxx_hw_data.h +++ b/drivers/crypto/qat/qat_c3xxx/adf_c3xxx_hw_data.h @@ -18,6 +18,7 @@ #define ADF_C3XXX_SMIAPF1_MASK_OFFSET (0x3A000 + 0x30) #define ADF_C3XXX_SMIA0_MASK 0xFFFF #define ADF_C3XXX_SMIA1_MASK 0x1 +#define ADF_C3XXX_SOFTSTRAP_CSR_OFFSET 0x2EC /* Error detection and correction */ #define ADF_C3XXX_AE_CTX_ENABLES(i) (i * 0x1000 + 0x20818) #define ADF_C3XXX_AE_MISC_CONTROL(i) (i * 0x1000 + 0x20960) diff --git a/drivers/crypto/qat/qat_c3xxx/adf_drv.c b/drivers/crypto/qat/qat_c3xxx/adf_drv.c index ed0e8e33fe4b..da6e88026988 100644 --- a/drivers/crypto/qat/qat_c3xxx/adf_drv.c +++ b/drivers/crypto/qat/qat_c3xxx/adf_drv.c @@ -126,10 +126,12 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent) pci_read_config_byte(pdev, PCI_REVISION_ID, &accel_pci_dev->revid); pci_read_config_dword(pdev, ADF_DEVICE_FUSECTL_OFFSET, &hw_data->fuses); + pci_read_config_dword(pdev, ADF_C3XXX_SOFTSTRAP_CSR_OFFSET, + &hw_data->straps); /* Get Accelerators and Accelerators Engines masks */ - hw_data->accel_mask = hw_data->get_accel_mask(hw_data->fuses); - hw_data->ae_mask = hw_data->get_ae_mask(hw_data->fuses); + hw_data->accel_mask = hw_data->get_accel_mask(hw_data); + hw_data->ae_mask = hw_data->get_ae_mask(hw_data); accel_pci_dev->sku = hw_data->get_sku(hw_data); /* If the device has no acceleration engines then ignore it. */ if (!hw_data->accel_mask || !hw_data->ae_mask || diff --git a/drivers/crypto/qat/qat_c3xxxvf/adf_c3xxxvf_hw_data.c b/drivers/crypto/qat/qat_c3xxxvf/adf_c3xxxvf_hw_data.c index d2fedbd7113c..cdf8c500ef2a 100644 --- a/drivers/crypto/qat/qat_c3xxxvf/adf_c3xxxvf_hw_data.c +++ b/drivers/crypto/qat/qat_c3xxxvf/adf_c3xxxvf_hw_data.c @@ -11,12 +11,12 @@ static struct adf_hw_device_class c3xxxiov_class = { .instances = 0 }; -static u32 get_accel_mask(u32 fuse) +static u32 get_accel_mask(struct adf_hw_device_data *self) { return ADF_C3XXXIOV_ACCELERATORS_MASK; } -static u32 get_ae_mask(u32 fuse) +static u32 get_ae_mask(struct adf_hw_device_data *self) { return ADF_C3XXXIOV_ACCELENGINES_MASK; } diff --git a/drivers/crypto/qat/qat_c3xxxvf/adf_drv.c b/drivers/crypto/qat/qat_c3xxxvf/adf_drv.c index 456979b136a2..1d1532e8fb6d 100644 --- a/drivers/crypto/qat/qat_c3xxxvf/adf_drv.c +++ b/drivers/crypto/qat/qat_c3xxxvf/adf_drv.c @@ -119,8 +119,8 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent) adf_init_hw_data_c3xxxiov(accel_dev->hw_device); /* Get Accelerators and Accelerators Engines masks */ - hw_data->accel_mask = hw_data->get_accel_mask(hw_data->fuses); - hw_data->ae_mask = hw_data->get_ae_mask(hw_data->fuses); + hw_data->accel_mask = hw_data->get_accel_mask(hw_data); + hw_data->ae_mask = hw_data->get_ae_mask(hw_data); accel_pci_dev->sku = hw_data->get_sku(hw_data); /* Create dev top level debugfs entry */ diff --git a/drivers/crypto/qat/qat_c62x/adf_c62x_hw_data.c b/drivers/crypto/qat/qat_c62x/adf_c62x_hw_data.c index 844ad5ed33fc..c0b5751e9682 100644 --- a/drivers/crypto/qat/qat_c62x/adf_c62x_hw_data.c +++ b/drivers/crypto/qat/qat_c62x/adf_c62x_hw_data.c @@ -22,15 +22,33 @@ static struct adf_hw_device_class c62x_class = { .instances = 0 }; -static u32 get_accel_mask(u32 fuse) +static u32 get_accel_mask(struct adf_hw_device_data *self) { - return (~fuse) >> ADF_C62X_ACCELERATORS_REG_OFFSET & - ADF_C62X_ACCELERATORS_MASK; + u32 straps = self->straps; + u32 fuses 
= self->fuses; + u32 accel; + + accel = ~(fuses | straps) >> ADF_C62X_ACCELERATORS_REG_OFFSET; + accel &= ADF_C62X_ACCELERATORS_MASK; + + return accel; } -static u32 get_ae_mask(u32 fuse) +static u32 get_ae_mask(struct adf_hw_device_data *self) { - return (~fuse) & ADF_C62X_ACCELENGINES_MASK; + u32 straps = self->straps; + u32 fuses = self->fuses; + unsigned long disabled; + u32 ae_disable; + int accel; + + /* If an accel is disabled, then disable the corresponding two AEs */ + disabled = ~get_accel_mask(self) & ADF_C62X_ACCELERATORS_MASK; + ae_disable = BIT(1) | BIT(0); + for_each_set_bit(accel, &disabled, ADF_C62X_MAX_ACCELERATORS) + straps |= ae_disable << (accel << 1); + + return ~(fuses | straps) & ADF_C62X_ACCELENGINES_MASK; } static u32 get_num_accels(struct adf_hw_device_data *self) @@ -119,11 +137,13 @@ static void adf_enable_error_correction(struct adf_accel_dev *accel_dev) { struct adf_hw_device_data *hw_device = accel_dev->hw_device; struct adf_bar *misc_bar = &GET_BARS(accel_dev)[ADF_C62X_PMISC_BAR]; + unsigned long accel_mask = hw_device->accel_mask; + unsigned long ae_mask = hw_device->ae_mask; void __iomem *csr = misc_bar->virt_addr; unsigned int val, i; /* Enable Accel Engine error detection & correction */ - for (i = 0; i < hw_device->get_num_aes(hw_device); i++) { + for_each_set_bit(i, &ae_mask, GET_MAX_ACCELENGINES(accel_dev)) { val = ADF_CSR_RD(csr, ADF_C62X_AE_CTX_ENABLES(i)); val |= ADF_C62X_ENABLE_AE_ECC_ERR; ADF_CSR_WR(csr, ADF_C62X_AE_CTX_ENABLES(i), val); @@ -133,7 +153,7 @@ static void adf_enable_error_correction(struct adf_accel_dev *accel_dev) } /* Enable shared memory error detection & correction */ - for (i = 0; i < hw_device->get_num_accels(hw_device); i++) { + for_each_set_bit(i, &accel_mask, ADF_C62X_MAX_ACCELERATORS) { val = ADF_CSR_RD(csr, ADF_C62X_UERRSSMSH(i)); val |= ADF_C62X_ERRSSMSH_EN; ADF_CSR_WR(csr, ADF_C62X_UERRSSMSH(i), val); diff --git a/drivers/crypto/qat/qat_c62x/adf_c62x_hw_data.h b/drivers/crypto/qat/qat_c62x/adf_c62x_hw_data.h index 88504d2bf30d..a2e2961a2102 100644 --- a/drivers/crypto/qat/qat_c62x/adf_c62x_hw_data.h +++ b/drivers/crypto/qat/qat_c62x/adf_c62x_hw_data.h @@ -19,6 +19,7 @@ #define ADF_C62X_SMIAPF1_MASK_OFFSET (0x3A000 + 0x30) #define ADF_C62X_SMIA0_MASK 0xFFFF #define ADF_C62X_SMIA1_MASK 0x1 +#define ADF_C62X_SOFTSTRAP_CSR_OFFSET 0x2EC /* Error detection and correction */ #define ADF_C62X_AE_CTX_ENABLES(i) (i * 0x1000 + 0x20818) #define ADF_C62X_AE_MISC_CONTROL(i) (i * 0x1000 + 0x20960) diff --git a/drivers/crypto/qat/qat_c62x/adf_drv.c b/drivers/crypto/qat/qat_c62x/adf_drv.c index d8e7c9c25590..3da697a566ad 100644 --- a/drivers/crypto/qat/qat_c62x/adf_drv.c +++ b/drivers/crypto/qat/qat_c62x/adf_drv.c @@ -126,10 +126,12 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent) pci_read_config_byte(pdev, PCI_REVISION_ID, &accel_pci_dev->revid); pci_read_config_dword(pdev, ADF_DEVICE_FUSECTL_OFFSET, &hw_data->fuses); + pci_read_config_dword(pdev, ADF_C62X_SOFTSTRAP_CSR_OFFSET, + &hw_data->straps); /* Get Accelerators and Accelerators Engines masks */ - hw_data->accel_mask = hw_data->get_accel_mask(hw_data->fuses); - hw_data->ae_mask = hw_data->get_ae_mask(hw_data->fuses); + hw_data->accel_mask = hw_data->get_accel_mask(hw_data); + hw_data->ae_mask = hw_data->get_ae_mask(hw_data); accel_pci_dev->sku = hw_data->get_sku(hw_data); /* If the device has no acceleration engines then ignore it. 
*/ if (!hw_data->accel_mask || !hw_data->ae_mask || diff --git a/drivers/crypto/qat/qat_c62xvf/adf_c62xvf_hw_data.c b/drivers/crypto/qat/qat_c62xvf/adf_c62xvf_hw_data.c index 29fd3f1091ab..a2543f75e81f 100644 --- a/drivers/crypto/qat/qat_c62xvf/adf_c62xvf_hw_data.c +++ b/drivers/crypto/qat/qat_c62xvf/adf_c62xvf_hw_data.c @@ -11,12 +11,12 @@ static struct adf_hw_device_class c62xiov_class = { .instances = 0 }; -static u32 get_accel_mask(u32 fuse) +static u32 get_accel_mask(struct adf_hw_device_data *self) { return ADF_C62XIOV_ACCELERATORS_MASK; } -static u32 get_ae_mask(u32 fuse) +static u32 get_ae_mask(struct adf_hw_device_data *self) { return ADF_C62XIOV_ACCELENGINES_MASK; } diff --git a/drivers/crypto/qat/qat_c62xvf/adf_drv.c b/drivers/crypto/qat/qat_c62xvf/adf_drv.c index b9810f79eb84..04742a6d91ca 100644 --- a/drivers/crypto/qat/qat_c62xvf/adf_drv.c +++ b/drivers/crypto/qat/qat_c62xvf/adf_drv.c @@ -119,8 +119,8 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent) adf_init_hw_data_c62xiov(accel_dev->hw_device); /* Get Accelerators and Accelerators Engines masks */ - hw_data->accel_mask = hw_data->get_accel_mask(hw_data->fuses); - hw_data->ae_mask = hw_data->get_ae_mask(hw_data->fuses); + hw_data->accel_mask = hw_data->get_accel_mask(hw_data); + hw_data->ae_mask = hw_data->get_ae_mask(hw_data); accel_pci_dev->sku = hw_data->get_sku(hw_data); /* Create dev top level debugfs entry */ diff --git a/drivers/crypto/qat/qat_common/adf_accel_devices.h b/drivers/crypto/qat/qat_common/adf_accel_devices.h index 06952ece53d9..411a505e1f59 100644 --- a/drivers/crypto/qat/qat_common/adf_accel_devices.h +++ b/drivers/crypto/qat/qat_common/adf_accel_devices.h @@ -104,8 +104,8 @@ struct adf_etr_ring_data; struct adf_hw_device_data { struct adf_hw_device_class *dev_class; - u32 (*get_accel_mask)(u32 fuse); - u32 (*get_ae_mask)(u32 fuse); + u32 (*get_accel_mask)(struct adf_hw_device_data *self); + u32 (*get_ae_mask)(struct adf_hw_device_data *self); u32 (*get_sram_bar_id)(struct adf_hw_device_data *self); u32 (*get_misc_bar_id)(struct adf_hw_device_data *self); u32 (*get_etr_bar_id)(struct adf_hw_device_data *self); @@ -131,6 +131,7 @@ struct adf_hw_device_data { const char *fw_name; const char *fw_mmp_name; u32 fuses; + u32 straps; u32 accel_capabilities_mask; u32 instance_id; u16 accel_mask; diff --git a/drivers/crypto/qat/qat_common/qat_hal.c b/drivers/crypto/qat/qat_common/qat_hal.c index 6b9d47682d04..bc07199459e7 100644 --- a/drivers/crypto/qat/qat_common/qat_hal.c +++ b/drivers/crypto/qat/qat_common/qat_hal.c @@ -346,11 +346,12 @@ static void qat_hal_put_wakeup_event(struct icp_qat_fw_loader_handle *handle, static int qat_hal_check_ae_alive(struct icp_qat_fw_loader_handle *handle) { + unsigned long ae_mask = handle->hal_handle->ae_mask; unsigned int base_cnt, cur_cnt; unsigned char ae; int times = MAX_RETRY_TIMES; - for (ae = 0; ae < handle->hal_handle->ae_max_num; ae++) { + for_each_set_bit(ae, &ae_mask, handle->hal_handle->ae_max_num) { base_cnt = qat_hal_rd_ae_csr(handle, ae, PROFILE_COUNT); base_cnt &= 0xffff; @@ -384,6 +385,7 @@ int qat_hal_check_ae_active(struct icp_qat_fw_loader_handle *handle, static void qat_hal_reset_timestamp(struct icp_qat_fw_loader_handle *handle) { + unsigned long ae_mask = handle->hal_handle->ae_mask; unsigned int misc_ctl; unsigned char ae; @@ -393,7 +395,7 @@ static void qat_hal_reset_timestamp(struct icp_qat_fw_loader_handle *handle) SET_GLB_CSR(handle, MISC_CONTROL, misc_ctl & (~MC_TIMESTAMP_ENABLE)); - for (ae = 0; ae < 
handle->hal_handle->ae_max_num; ae++) { + for_each_set_bit(ae, &ae_mask, handle->hal_handle->ae_max_num) { qat_hal_wr_ae_csr(handle, ae, TIMESTAMP_LOW, 0); qat_hal_wr_ae_csr(handle, ae, TIMESTAMP_HIGH, 0); } @@ -438,6 +440,7 @@ static int qat_hal_init_esram(struct icp_qat_fw_loader_handle *handle) #define SHRAM_INIT_CYCLES 2060 int qat_hal_clr_reset(struct icp_qat_fw_loader_handle *handle) { + unsigned long ae_mask = handle->hal_handle->ae_mask; unsigned int ae_reset_csr; unsigned char ae; unsigned int clk_csr; @@ -464,7 +467,7 @@ int qat_hal_clr_reset(struct icp_qat_fw_loader_handle *handle) goto out_err; /* Set undefined power-up/reset states to reasonable default values */ - for (ae = 0; ae < handle->hal_handle->ae_max_num; ae++) { + for_each_set_bit(ae, &ae_mask, handle->hal_handle->ae_max_num) { qat_hal_wr_ae_csr(handle, ae, CTX_ENABLES, INIT_CTX_ENABLE_VALUE); qat_hal_wr_indr_csr(handle, ae, ICP_QAT_UCLO_AE_ALL_CTX, @@ -570,10 +573,11 @@ static void qat_hal_enable_ctx(struct icp_qat_fw_loader_handle *handle, static void qat_hal_clear_xfer(struct icp_qat_fw_loader_handle *handle) { + unsigned long ae_mask = handle->hal_handle->ae_mask; unsigned char ae; unsigned short reg; - for (ae = 0; ae < handle->hal_handle->ae_max_num; ae++) { + for_each_set_bit(ae, &ae_mask, handle->hal_handle->ae_max_num) { for (reg = 0; reg < ICP_QAT_UCLO_MAX_GPR_REG; reg++) { qat_hal_init_rd_xfer(handle, ae, 0, ICP_SR_RD_ABS, reg, 0); @@ -585,6 +589,7 @@ static void qat_hal_clear_xfer(struct icp_qat_fw_loader_handle *handle) static int qat_hal_clear_gpr(struct icp_qat_fw_loader_handle *handle) { + unsigned long ae_mask = handle->hal_handle->ae_mask; unsigned char ae; unsigned int ctx_mask = ICP_QAT_UCLO_AE_ALL_CTX; int times = MAX_RETRY_TIMES; @@ -592,7 +597,7 @@ static int qat_hal_clear_gpr(struct icp_qat_fw_loader_handle *handle) unsigned int savctx = 0; int ret = 0; - for (ae = 0; ae < handle->hal_handle->ae_max_num; ae++) { + for_each_set_bit(ae, &ae_mask, handle->hal_handle->ae_max_num) { csr_val = qat_hal_rd_ae_csr(handle, ae, AE_MISC_CONTROL); csr_val &= ~(1 << MMC_SHARE_CS_BITPOS); qat_hal_wr_ae_csr(handle, ae, AE_MISC_CONTROL, csr_val); @@ -613,7 +618,7 @@ static int qat_hal_clear_gpr(struct icp_qat_fw_loader_handle *handle) qat_hal_wr_ae_csr(handle, ae, CTX_SIG_EVENTS_ACTIVE, 0); qat_hal_enable_ctx(handle, ae, ctx_mask); } - for (ae = 0; ae < handle->hal_handle->ae_max_num; ae++) { + for_each_set_bit(ae, &ae_mask, handle->hal_handle->ae_max_num) { /* wait for AE to finish */ do { ret = qat_hal_wait_cycles(handle, ae, 20, 1); @@ -654,6 +659,8 @@ int qat_hal_init(struct adf_accel_dev *accel_dev) struct adf_hw_device_data *hw_data = accel_dev->hw_device; struct adf_bar *misc_bar = &pci_info->pci_bars[hw_data->get_misc_bar_id(hw_data)]; + unsigned long ae_mask = hw_data->ae_mask; + unsigned int csr_val = 0; struct adf_bar *sram_bar; handle = kzalloc(sizeof(*handle), GFP_KERNEL); @@ -689,9 +696,7 @@ int qat_hal_init(struct adf_accel_dev *accel_dev) /* create AE objects */ handle->hal_handle->upc_mask = 0x1ffff; handle->hal_handle->max_ustore = 0x4000; - for (ae = 0; ae < ICP_QAT_UCLO_MAX_AE; ae++) { - if (!(hw_data->ae_mask & (1 << ae))) - continue; + for_each_set_bit(ae, &ae_mask, ICP_QAT_UCLO_MAX_AE) { handle->hal_handle->aes[ae].free_addr = 0; handle->hal_handle->aes[ae].free_size = handle->hal_handle->max_ustore; @@ -714,9 +719,7 @@ int qat_hal_init(struct adf_accel_dev *accel_dev) } /* Set SIGNATURE_ENABLE[0] to 0x1 in order to enable ALU_OUT csr */ - for (ae = 0; ae < 
handle->hal_handle->ae_max_num; ae++) { - unsigned int csr_val = 0; - + for_each_set_bit(ae, &ae_mask, handle->hal_handle->ae_max_num) { csr_val = qat_hal_rd_ae_csr(handle, ae, SIGNATURE_ENABLE); csr_val |= 0x1; qat_hal_wr_ae_csr(handle, ae, SIGNATURE_ENABLE, csr_val); diff --git a/drivers/crypto/qat/qat_dh895xcc/adf_dh895xcc_hw_data.c b/drivers/crypto/qat/qat_dh895xcc/adf_dh895xcc_hw_data.c index b975c263446d..6a0d01103136 100644 --- a/drivers/crypto/qat/qat_dh895xcc/adf_dh895xcc_hw_data.c +++ b/drivers/crypto/qat/qat_dh895xcc/adf_dh895xcc_hw_data.c @@ -24,15 +24,19 @@ static struct adf_hw_device_class dh895xcc_class = { .instances = 0 }; -static u32 get_accel_mask(u32 fuse) +static u32 get_accel_mask(struct adf_hw_device_data *self) { - return (~fuse) >> ADF_DH895XCC_ACCELERATORS_REG_OFFSET & - ADF_DH895XCC_ACCELERATORS_MASK; + u32 fuses = self->fuses; + + return ~fuses >> ADF_DH895XCC_ACCELERATORS_REG_OFFSET & + ADF_DH895XCC_ACCELERATORS_MASK; } -static u32 get_ae_mask(u32 fuse) +static u32 get_ae_mask(struct adf_hw_device_data *self) { - return (~fuse) & ADF_DH895XCC_ACCELENGINES_MASK; + u32 fuses = self->fuses; + + return ~fuses & ADF_DH895XCC_ACCELENGINES_MASK; } static u32 get_num_accels(struct adf_hw_device_data *self) @@ -131,11 +135,13 @@ static void adf_enable_error_correction(struct adf_accel_dev *accel_dev) { struct adf_hw_device_data *hw_device = accel_dev->hw_device; struct adf_bar *misc_bar = &GET_BARS(accel_dev)[ADF_DH895XCC_PMISC_BAR]; + unsigned long accel_mask = hw_device->accel_mask; + unsigned long ae_mask = hw_device->ae_mask; void __iomem *csr = misc_bar->virt_addr; unsigned int val, i; /* Enable Accel Engine error detection & correction */ - for (i = 0; i < hw_device->get_num_aes(hw_device); i++) { + for_each_set_bit(i, &ae_mask, GET_MAX_ACCELENGINES(accel_dev)) { val = ADF_CSR_RD(csr, ADF_DH895XCC_AE_CTX_ENABLES(i)); val |= ADF_DH895XCC_ENABLE_AE_ECC_ERR; ADF_CSR_WR(csr, ADF_DH895XCC_AE_CTX_ENABLES(i), val); @@ -145,7 +151,7 @@ static void adf_enable_error_correction(struct adf_accel_dev *accel_dev) } /* Enable shared memory error detection & correction */ - for (i = 0; i < hw_device->get_num_accels(hw_device); i++) { + for_each_set_bit(i, &accel_mask, ADF_DH895XCC_MAX_ACCELERATORS) { val = ADF_CSR_RD(csr, ADF_DH895XCC_UERRSSMSH(i)); val |= ADF_DH895XCC_ERRSSMSH_EN; ADF_CSR_WR(csr, ADF_DH895XCC_UERRSSMSH(i), val); diff --git a/drivers/crypto/qat/qat_dh895xcc/adf_drv.c b/drivers/crypto/qat/qat_dh895xcc/adf_drv.c index ecb4f6f20e22..d7941bc2bafd 100644 --- a/drivers/crypto/qat/qat_dh895xcc/adf_drv.c +++ b/drivers/crypto/qat/qat_dh895xcc/adf_drv.c @@ -128,8 +128,8 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent) &hw_data->fuses); /* Get Accelerators and Accelerators Engines masks */ - hw_data->accel_mask = hw_data->get_accel_mask(hw_data->fuses); - hw_data->ae_mask = hw_data->get_ae_mask(hw_data->fuses); + hw_data->accel_mask = hw_data->get_accel_mask(hw_data); + hw_data->ae_mask = hw_data->get_ae_mask(hw_data); accel_pci_dev->sku = hw_data->get_sku(hw_data); /* If the device has no acceleration engines then ignore it. 
*/ if (!hw_data->accel_mask || !hw_data->ae_mask || diff --git a/drivers/crypto/qat/qat_dh895xccvf/adf_dh895xccvf_hw_data.c b/drivers/crypto/qat/qat_dh895xccvf/adf_dh895xccvf_hw_data.c index 5246f0524ca3..737f9132f71a 100644 --- a/drivers/crypto/qat/qat_dh895xccvf/adf_dh895xccvf_hw_data.c +++ b/drivers/crypto/qat/qat_dh895xccvf/adf_dh895xccvf_hw_data.c @@ -11,12 +11,12 @@ static struct adf_hw_device_class dh895xcciov_class = { .instances = 0 }; -static u32 get_accel_mask(u32 fuse) +static u32 get_accel_mask(struct adf_hw_device_data *self) { return ADF_DH895XCCIOV_ACCELERATORS_MASK; } -static u32 get_ae_mask(u32 fuse) +static u32 get_ae_mask(struct adf_hw_device_data *self) { return ADF_DH895XCCIOV_ACCELENGINES_MASK; } diff --git a/drivers/crypto/qat/qat_dh895xccvf/adf_drv.c b/drivers/crypto/qat/qat_dh895xccvf/adf_drv.c index 404cf9df6922..c972554a755e 100644 --- a/drivers/crypto/qat/qat_dh895xccvf/adf_drv.c +++ b/drivers/crypto/qat/qat_dh895xccvf/adf_drv.c @@ -119,8 +119,8 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent) adf_init_hw_data_dh895xcciov(accel_dev->hw_device); /* Get Accelerators and Accelerators Engines masks */ - hw_data->accel_mask = hw_data->get_accel_mask(hw_data->fuses); - hw_data->ae_mask = hw_data->get_ae_mask(hw_data->fuses); + hw_data->accel_mask = hw_data->get_accel_mask(hw_data); + hw_data->ae_mask = hw_data->get_ae_mask(hw_data); accel_pci_dev->sku = hw_data->get_sku(hw_data); /* Create dev top level debugfs entry */ From patchwork Mon Oct 12 20:38:19 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Giovanni Cabiddu X-Patchwork-Id: 269567 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.9 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, UNWANTED_LANGUAGE_BODY, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id D9503C433E7 for ; Mon, 12 Oct 2020 20:39:09 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 98E9720FC3 for ; Mon, 12 Oct 2020 20:39:09 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728981AbgJLUjJ (ORCPT ); Mon, 12 Oct 2020 16:39:09 -0400 Received: from mga09.intel.com ([134.134.136.24]:33900 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728412AbgJLUjI (ORCPT ); Mon, 12 Oct 2020 16:39:08 -0400 IronPort-SDR: oYSLH1HjWXuznVy1EFJPyCWr2MyKFWuOcCewaKebTBrNgr/bZWNQy0Ru1AVp4ITNNjnQDy3yiT iSKL73QoKQsw== X-IronPort-AV: E=McAfee;i="6000,8403,9772"; a="165913062" X-IronPort-AV: E=Sophos;i="5.77,367,1596524400"; d="scan'208";a="165913062" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Oct 2020 13:39:08 -0700 IronPort-SDR: odCxxjqI4kdPjNB1kZtU6dBdivOwgy1b9yB622APTxLJ0ev7dl4ovu0zz0s7LbcP+wPhcZvo8+ KIMcbUkQ9yxg== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.77,367,1596524400"; d="scan'208";a="299328128" Received: from silpixa00400314.ir.intel.com (HELO silpixa00400314.ger.corp.intel.com) ([10.237.222.51]) by fmsmga007.fm.intel.com with ESMTP; 12 Oct 2020 
From: Giovanni Cabiddu
To: herbert@gondor.apana.org.au
Cc: linux-crypto@vger.kernel.org, qat-linux@intel.com, Ahsan Atta, Giovanni Cabiddu, Fiona Trahe, Andy Shevchenko
Subject: [PATCH 03/31] crypto: qat - num_rings_per_bank is device dependent
Date: Mon, 12 Oct 2020 21:38:19 +0100
Message-Id: <20201012203847.340030-4-giovanni.cabiddu@intel.com>
In-Reply-To: <20201012203847.340030-1-giovanni.cabiddu@intel.com>

From: Ahsan Atta

This change allows support for QAT devices that may not have 16 rings per bank. The rings array in a bank is now allocated dynamically, based on the number of rings per bank supported by the device. Note that in the error path of adf_init_bank(), ring->inflights is set to NULL after the free to silence a false positive double free reported by clang scan-build.

Signed-off-by: Ahsan Atta Co-developed-by: Giovanni Cabiddu Signed-off-by: Giovanni Cabiddu Reviewed-by: Fiona Trahe Reviewed-by: Andy Shevchenko --- .../crypto/qat/qat_c3xxx/adf_c3xxx_hw_data.c | 1 + .../qat/qat_c3xxxvf/adf_c3xxxvf_hw_data.c | 1 + .../crypto/qat/qat_c62x/adf_c62x_hw_data.c | 1 + .../qat/qat_c62xvf/adf_c62xvf_hw_data.c | 1 + .../crypto/qat/qat_common/adf_accel_devices.h | 3 ++ drivers/crypto/qat/qat_common/adf_transport.c | 42 +++++++++++++------ .../qat/qat_common/adf_transport_debug.c | 10 ++++- .../qat/qat_common/adf_transport_internal.h | 2 +- .../qat/qat_dh895xcc/adf_dh895xcc_hw_data.c | 1 + .../qat_dh895xccvf/adf_dh895xccvf_hw_data.c | 1 + 10 files changed, 47 insertions(+), 16 deletions(-) diff --git a/drivers/crypto/qat/qat_c3xxx/adf_c3xxx_hw_data.c b/drivers/crypto/qat/qat_c3xxx/adf_c3xxx_hw_data.c index 4b2f5aa83391..62b0b290ff85 100644 --- a/drivers/crypto/qat/qat_c3xxx/adf_c3xxx_hw_data.c +++ b/drivers/crypto/qat/qat_c3xxx/adf_c3xxx_hw_data.c @@ -176,6 +176,7 @@ void adf_init_hw_data_c3xxx(struct adf_hw_device_data *hw_data) hw_data->dev_class = &c3xxx_class; hw_data->instance_id = c3xxx_class.instances++; hw_data->num_banks = ADF_C3XXX_ETR_MAX_BANKS; + hw_data->num_rings_per_bank = ADF_ETR_MAX_RINGS_PER_BANK; hw_data->num_accel = ADF_C3XXX_MAX_ACCELERATORS; hw_data->num_logical_accel = 1; hw_data->num_engines = ADF_C3XXX_MAX_ACCELENGINES; diff --git a/drivers/crypto/qat/qat_c3xxxvf/adf_c3xxxvf_hw_data.c b/drivers/crypto/qat/qat_c3xxxvf/adf_c3xxxvf_hw_data.c index cdf8c500ef2a..80a355e85a72 100644 --- a/drivers/crypto/qat/qat_c3xxxvf/adf_c3xxxvf_hw_data.c +++ b/drivers/crypto/qat/qat_c3xxxvf/adf_c3xxxvf_hw_data.c @@ -69,6 +69,7 @@ void adf_init_hw_data_c3xxxiov(struct adf_hw_device_data *hw_data) { hw_data->dev_class = &c3xxxiov_class; hw_data->num_banks = ADF_C3XXXIOV_ETR_MAX_BANKS; + hw_data->num_rings_per_bank = ADF_ETR_MAX_RINGS_PER_BANK; hw_data->num_accel = ADF_C3XXXIOV_MAX_ACCELERATORS; hw_data->num_logical_accel = 1; hw_data->num_engines = ADF_C3XXXIOV_MAX_ACCELENGINES; diff --git a/drivers/crypto/qat/qat_c62x/adf_c62x_hw_data.c b/drivers/crypto/qat/qat_c62x/adf_c62x_hw_data.c index c0b5751e9682..1334b43e46e4 100644 --- a/drivers/crypto/qat/qat_c62x/adf_c62x_hw_data.c +++ b/drivers/crypto/qat/qat_c62x/adf_c62x_hw_data.c @@ -186,6 +186,7 @@ void adf_init_hw_data_c62x(struct adf_hw_device_data *hw_data) hw_data->dev_class = &c62x_class; hw_data->instance_id = c62x_class.instances++; hw_data->num_banks = ADF_C62X_ETR_MAX_BANKS; + hw_data->num_rings_per_bank = ADF_ETR_MAX_RINGS_PER_BANK;
hw_data->num_accel = ADF_C62X_MAX_ACCELERATORS; hw_data->num_logical_accel = 1; hw_data->num_engines = ADF_C62X_MAX_ACCELENGINES; diff --git a/drivers/crypto/qat/qat_c62xvf/adf_c62xvf_hw_data.c b/drivers/crypto/qat/qat_c62xvf/adf_c62xvf_hw_data.c index a2543f75e81f..7725387e58f8 100644 --- a/drivers/crypto/qat/qat_c62xvf/adf_c62xvf_hw_data.c +++ b/drivers/crypto/qat/qat_c62xvf/adf_c62xvf_hw_data.c @@ -69,6 +69,7 @@ void adf_init_hw_data_c62xiov(struct adf_hw_device_data *hw_data) { hw_data->dev_class = &c62xiov_class; hw_data->num_banks = ADF_C62XIOV_ETR_MAX_BANKS; + hw_data->num_rings_per_bank = ADF_ETR_MAX_RINGS_PER_BANK; hw_data->num_accel = ADF_C62XIOV_MAX_ACCELERATORS; hw_data->num_logical_accel = 1; hw_data->num_engines = ADF_C62XIOV_MAX_ACCELENGINES; diff --git a/drivers/crypto/qat/qat_common/adf_accel_devices.h b/drivers/crypto/qat/qat_common/adf_accel_devices.h index 411a505e1f59..85b423d28f77 100644 --- a/drivers/crypto/qat/qat_common/adf_accel_devices.h +++ b/drivers/crypto/qat/qat_common/adf_accel_devices.h @@ -139,6 +139,7 @@ struct adf_hw_device_data { u16 tx_rings_mask; u8 tx_rx_gap; u8 num_banks; + u8 num_rings_per_bank; u8 num_accel; u8 num_logical_accel; u8 num_engines; @@ -156,6 +157,8 @@ struct adf_hw_device_data { #define GET_BARS(accel_dev) ((accel_dev)->accel_pci_dev.pci_bars) #define GET_HW_DATA(accel_dev) (accel_dev->hw_device) #define GET_MAX_BANKS(accel_dev) (GET_HW_DATA(accel_dev)->num_banks) +#define GET_NUM_RINGS_PER_BANK(accel_dev) \ + GET_HW_DATA(accel_dev)->num_rings_per_bank #define GET_MAX_ACCELENGINES(accel_dev) (GET_HW_DATA(accel_dev)->num_engines) #define accel_to_pci_dev(accel_ptr) accel_ptr->accel_pci_dev.pci_dev diff --git a/drivers/crypto/qat/qat_common/adf_transport.c b/drivers/crypto/qat/qat_common/adf_transport.c index 2ad774017200..24ddaaaa55b1 100644 --- a/drivers/crypto/qat/qat_common/adf_transport.c +++ b/drivers/crypto/qat/qat_common/adf_transport.c @@ -190,6 +190,7 @@ int adf_create_ring(struct adf_accel_dev *accel_dev, const char *section, struct adf_etr_ring_data **ring_ptr) { struct adf_etr_data *transport_data = accel_dev->transport; + u8 num_rings_per_bank = GET_NUM_RINGS_PER_BANK(accel_dev); struct adf_etr_bank_data *bank; struct adf_etr_ring_data *ring; char val[ADF_CFG_MAX_VAL_LEN_IN_BYTES]; @@ -219,7 +220,7 @@ int adf_create_ring(struct adf_accel_dev *accel_dev, const char *section, dev_err(&GET_DEV(accel_dev), "Can't get ring number\n"); return -EFAULT; } - if (ring_num >= ADF_ETR_MAX_RINGS_PER_BANK) { + if (ring_num >= num_rings_per_bank) { dev_err(&GET_DEV(accel_dev), "Invalid ring number\n"); return -EFAULT; } @@ -286,15 +287,15 @@ void adf_remove_ring(struct adf_etr_ring_data *ring) static void adf_ring_response_handler(struct adf_etr_bank_data *bank) { - u32 empty_rings, i; + u8 num_rings_per_bank = GET_NUM_RINGS_PER_BANK(bank->accel_dev); + unsigned long empty_rings; + int i; empty_rings = READ_CSR_E_STAT(bank->csr_addr, bank->bank_number); empty_rings = ~empty_rings & bank->irq_mask; - for (i = 0; i < ADF_ETR_MAX_RINGS_PER_BANK; ++i) { - if (empty_rings & (1 << i)) - adf_handle_response(&bank->rings[i]); - } + for_each_set_bit(i, &empty_rings, num_rings_per_bank) + adf_handle_response(&bank->rings[i]); } void adf_response_handler(uintptr_t bank_addr) @@ -343,9 +344,12 @@ static int adf_init_bank(struct adf_accel_dev *accel_dev, u32 bank_num, void __iomem *csr_addr) { struct adf_hw_device_data *hw_data = accel_dev->hw_device; + u8 num_rings_per_bank = hw_data->num_rings_per_bank; struct adf_etr_ring_data *ring; struct 
adf_etr_ring_data *tx_ring; u32 i, coalesc_enabled = 0; + unsigned long ring_mask; + int size; memset(bank, 0, sizeof(*bank)); bank->bank_number = bank_num; @@ -353,6 +357,13 @@ static int adf_init_bank(struct adf_accel_dev *accel_dev, bank->accel_dev = accel_dev; spin_lock_init(&bank->lock); + /* Allocate the rings in the bank */ + size = num_rings_per_bank * sizeof(struct adf_etr_ring_data); + bank->rings = kzalloc_node(size, GFP_KERNEL, + dev_to_node(&GET_DEV(accel_dev))); + if (!bank->rings) + return -ENOMEM; + /* Enable IRQ coalescing always. This will allow to use * the optimised flag and coalesc register. * If it is disabled in the config file just use min time value */ @@ -363,7 +374,7 @@ static int adf_init_bank(struct adf_accel_dev *accel_dev, else bank->irq_coalesc_timer = ADF_COALESCING_MIN_TIME; - for (i = 0; i < ADF_ETR_MAX_RINGS_PER_BANK; i++) { + for (i = 0; i < num_rings_per_bank; i++) { WRITE_CSR_RING_CONFIG(csr_addr, bank_num, i, 0); WRITE_CSR_RING_BASE(csr_addr, bank_num, i, 0); ring = &bank->rings[i]; @@ -394,11 +405,13 @@ static int adf_init_bank(struct adf_accel_dev *accel_dev, WRITE_CSR_INT_SRCSEL(csr_addr, bank_num); return 0; err: - for (i = 0; i < ADF_ETR_MAX_RINGS_PER_BANK; i++) { + ring_mask = hw_data->tx_rings_mask; + for_each_set_bit(i, &ring_mask, num_rings_per_bank) { ring = &bank->rings[i]; - if (hw_data->tx_rings_mask & (1 << i)) - kfree(ring->inflights); + kfree(ring->inflights); + ring->inflights = NULL; } + kfree(bank->rings); return -ENOMEM; } @@ -464,11 +477,12 @@ EXPORT_SYMBOL_GPL(adf_init_etr_data); static void cleanup_bank(struct adf_etr_bank_data *bank) { + struct adf_accel_dev *accel_dev = bank->accel_dev; + struct adf_hw_device_data *hw_data = accel_dev->hw_device; + u8 num_rings_per_bank = hw_data->num_rings_per_bank; u32 i; - for (i = 0; i < ADF_ETR_MAX_RINGS_PER_BANK; i++) { - struct adf_accel_dev *accel_dev = bank->accel_dev; - struct adf_hw_device_data *hw_data = accel_dev->hw_device; + for (i = 0; i < num_rings_per_bank; i++) { struct adf_etr_ring_data *ring = &bank->rings[i]; if (bank->ring_mask & (1 << i)) @@ -477,6 +491,7 @@ static void cleanup_bank(struct adf_etr_bank_data *bank) if (hw_data->tx_rings_mask & (1 << i)) kfree(ring->inflights); } + kfree(bank->rings); adf_bank_debugfs_rm(bank); memset(bank, 0, sizeof(*bank)); } @@ -507,6 +522,7 @@ void adf_cleanup_etr_data(struct adf_accel_dev *accel_dev) if (etr_data) { adf_cleanup_etr_handles(accel_dev); debugfs_remove(etr_data->debug); + kfree(etr_data->banks->rings); kfree(etr_data->banks); kfree(etr_data); accel_dev->transport = NULL; diff --git a/drivers/crypto/qat/qat_common/adf_transport_debug.c b/drivers/crypto/qat/qat_common/adf_transport_debug.c index dac25ba47260..da79d734c035 100644 --- a/drivers/crypto/qat/qat_common/adf_transport_debug.c +++ b/drivers/crypto/qat/qat_common/adf_transport_debug.c @@ -117,11 +117,14 @@ void adf_ring_debugfs_rm(struct adf_etr_ring_data *ring) static void *adf_bank_start(struct seq_file *sfile, loff_t *pos) { + struct adf_etr_bank_data *bank = sfile->private; + u8 num_rings_per_bank = GET_NUM_RINGS_PER_BANK(bank->accel_dev); + mutex_lock(&bank_read_lock); if (*pos == 0) return SEQ_START_TOKEN; - if (*pos >= ADF_ETR_MAX_RINGS_PER_BANK) + if (*pos >= num_rings_per_bank) return NULL; return pos; @@ -129,7 +132,10 @@ static void *adf_bank_start(struct seq_file *sfile, loff_t *pos) static void *adf_bank_next(struct seq_file *sfile, void *v, loff_t *pos) { - if (++(*pos) >= ADF_ETR_MAX_RINGS_PER_BANK) + struct adf_etr_bank_data *bank = 
sfile->private; + u8 num_rings_per_bank = GET_NUM_RINGS_PER_BANK(bank->accel_dev); + + if (++(*pos) >= num_rings_per_bank) return NULL; return pos; diff --git a/drivers/crypto/qat/qat_common/adf_transport_internal.h b/drivers/crypto/qat/qat_common/adf_transport_internal.h index c7faf4e2d302..501bcf0f1809 100644 --- a/drivers/crypto/qat/qat_common/adf_transport_internal.h +++ b/drivers/crypto/qat/qat_common/adf_transport_internal.h @@ -28,7 +28,7 @@ struct adf_etr_ring_data { }; struct adf_etr_bank_data { - struct adf_etr_ring_data rings[ADF_ETR_MAX_RINGS_PER_BANK]; + struct adf_etr_ring_data *rings; struct tasklet_struct resp_handler; void __iomem *csr_addr; u32 irq_coalesc_timer; diff --git a/drivers/crypto/qat/qat_dh895xcc/adf_dh895xcc_hw_data.c b/drivers/crypto/qat/qat_dh895xcc/adf_dh895xcc_hw_data.c index 6a0d01103136..1f3ea3ba1cee 100644 --- a/drivers/crypto/qat/qat_dh895xcc/adf_dh895xcc_hw_data.c +++ b/drivers/crypto/qat/qat_dh895xcc/adf_dh895xcc_hw_data.c @@ -185,6 +185,7 @@ void adf_init_hw_data_dh895xcc(struct adf_hw_device_data *hw_data) hw_data->dev_class = &dh895xcc_class; hw_data->instance_id = dh895xcc_class.instances++; hw_data->num_banks = ADF_DH895XCC_ETR_MAX_BANKS; + hw_data->num_rings_per_bank = ADF_ETR_MAX_RINGS_PER_BANK; hw_data->num_accel = ADF_DH895XCC_MAX_ACCELERATORS; hw_data->num_logical_accel = 1; hw_data->num_engines = ADF_DH895XCC_MAX_ACCELENGINES; diff --git a/drivers/crypto/qat/qat_dh895xccvf/adf_dh895xccvf_hw_data.c b/drivers/crypto/qat/qat_dh895xccvf/adf_dh895xccvf_hw_data.c index 737f9132f71a..eca144bc1d67 100644 --- a/drivers/crypto/qat/qat_dh895xccvf/adf_dh895xccvf_hw_data.c +++ b/drivers/crypto/qat/qat_dh895xccvf/adf_dh895xccvf_hw_data.c @@ -69,6 +69,7 @@ void adf_init_hw_data_dh895xcciov(struct adf_hw_device_data *hw_data) { hw_data->dev_class = &dh895xcciov_class; hw_data->num_banks = ADF_DH895XCCIOV_ETR_MAX_BANKS; + hw_data->num_rings_per_bank = ADF_ETR_MAX_RINGS_PER_BANK; hw_data->num_accel = ADF_DH895XCCIOV_MAX_ACCELERATORS; hw_data->num_logical_accel = 1; hw_data->num_engines = ADF_DH895XCCIOV_MAX_ACCELENGINES; From patchwork Mon Oct 12 20:38:20 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Giovanni Cabiddu X-Patchwork-Id: 285448 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3B65EC433DF for ; Mon, 12 Oct 2020 20:39:12 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id E65512145D for ; Mon, 12 Oct 2020 20:39:11 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728992AbgJLUjL (ORCPT ); Mon, 12 Oct 2020 16:39:11 -0400 Received: from mga09.intel.com ([134.134.136.24]:33900 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728412AbgJLUjL (ORCPT ); Mon, 12 Oct 2020 16:39:11 -0400 IronPort-SDR: p9vL06LKz2BEA/+FAx0xA7B9W76uJXxu1PD39VmVCJ6mWFFcjNQn8EQHBanzRpmGEiFcL0eigu TUR0zYH9Zwfg== X-IronPort-AV: E=McAfee;i="6000,8403,9772"; a="165913066" X-IronPort-AV: E=Sophos;i="5.77,367,1596524400"; d="scan'208";a="165913066" 
From: Giovanni Cabiddu
To: herbert@gondor.apana.org.au
Cc: linux-crypto@vger.kernel.org, qat-linux@intel.com, Giovanni Cabiddu, Fiona Trahe, Wojciech Ziemba, Andy Shevchenko
Subject: [PATCH 04/31] crypto: qat - fix configuration of iov threads
Date: Mon, 12 Oct 2020 21:38:20 +0100
Message-Id: <20201012203847.340030-5-giovanni.cabiddu@intel.com>
In-Reply-To: <20201012203847.340030-1-giovanni.cabiddu@intel.com>

The number of AE2FUNC_MAP registers differs between QAT devices (c62x, c3xxx and dh895xcc), although the logic and the register offsets are the same across devices. This patch moves the logic that configures the IOV threads into a common function that takes the number of AE2FUNC_MAP registers supported by a device as input. A per-device wrapper around this function is hooked into the adf_hw_device_data structure of each device and calls it with the appropriate register counts. The IOV thread configuration logic is added to a new file, adf_gen2_hw_data.c, which will contain code shared across QAT GEN2 devices.
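Structurally this is the usual "shared helper plus per-device ops hook" pattern. The sketch below shows only the shape: the CSR accesses are stubbed with a printf, whereas the real adf_gen2_cfg_iov_thds() walks the AE2FUNCTION_MAP group A and group B registers in the PMISC BAR and sets or clears their valid bit. The group A/B register counts (48/6 for c3xxx, 80/10 for c62x) are the ones defined by this patch; everything else here is illustrative.

/*
 * Sketch of the dispatch pattern: one generic GEN2 routine that is
 * parameterized by per-device register counts, installed through a
 * function pointer in the per-device data. Hardware access is stubbed.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct accel_dev;			/* opaque in this sketch */

struct hw_device_data {
	void (*configure_iov_threads)(struct accel_dev *accel_dev, bool enable);
};

/* Generic helper: only the number of AE2FUNC_MAP registers differs. */
static void gen2_cfg_iov_thds(struct accel_dev *accel_dev, bool enable,
			      int num_a_regs, int num_b_regs)
{
	printf("%s valid bit on %d group-A and %d group-B AE2FUNC_MAP regs\n",
	       enable ? "set" : "clear", num_a_regs, num_b_regs);
}

/* Thin per-device wrappers supply the device-specific counts. */
static void c3xxx_configure_iov_threads(struct accel_dev *accel_dev, bool enable)
{
	gen2_cfg_iov_thds(accel_dev, enable, 48, 6);
}

static void c62x_configure_iov_threads(struct accel_dev *accel_dev, bool enable)
{
	gen2_cfg_iov_thds(accel_dev, enable, 80, 10);
}

int main(void)
{
	struct hw_device_data c3xxx = {
		.configure_iov_threads = c3xxx_configure_iov_threads,
	};
	struct hw_device_data c62x = {
		.configure_iov_threads = c62x_configure_iov_threads,
	};

	/* Common code calls through the hook without knowing the device. */
	c3xxx.configure_iov_threads(NULL, true);
	c62x.configure_iov_threads(NULL, false);
	return 0;
}

The diffstat (adf_sriov.c losing roughly 60 lines) suggests the common SR-IOV code now simply calls this hook instead of carrying per-device register loops, though the adf_sriov.c hunk itself is not shown in this excerpt.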
Signed-off-by: Giovanni Cabiddu Reviewed-by: Fiona Trahe Reviewed-by: Wojciech Ziemba Reviewed-by: Andy Shevchenko --- .../crypto/qat/qat_c3xxx/adf_c3xxx_hw_data.c | 9 +++ .../crypto/qat/qat_c3xxx/adf_c3xxx_hw_data.h | 4 ++ .../crypto/qat/qat_c62x/adf_c62x_hw_data.c | 9 +++ .../crypto/qat/qat_c62x/adf_c62x_hw_data.h | 4 ++ drivers/crypto/qat/qat_common/Makefile | 1 + .../crypto/qat/qat_common/adf_accel_devices.h | 2 + .../crypto/qat/qat_common/adf_gen2_hw_data.c | 38 +++++++++++ .../crypto/qat/qat_common/adf_gen2_hw_data.h | 30 +++++++++ drivers/crypto/qat/qat_common/adf_sriov.c | 63 ++----------------- .../qat/qat_dh895xcc/adf_dh895xcc_hw_data.c | 9 +++ .../qat/qat_dh895xcc/adf_dh895xcc_hw_data.h | 5 ++ 11 files changed, 115 insertions(+), 59 deletions(-) create mode 100644 drivers/crypto/qat/qat_common/adf_gen2_hw_data.c create mode 100644 drivers/crypto/qat/qat_common/adf_gen2_hw_data.h diff --git a/drivers/crypto/qat/qat_c3xxx/adf_c3xxx_hw_data.c b/drivers/crypto/qat/qat_c3xxx/adf_c3xxx_hw_data.c index 62b0b290ff85..f449b2a0e82d 100644 --- a/drivers/crypto/qat/qat_c3xxx/adf_c3xxx_hw_data.c +++ b/drivers/crypto/qat/qat_c3xxx/adf_c3xxx_hw_data.c @@ -3,6 +3,7 @@ #include #include #include +#include #include "adf_c3xxx_hw_data.h" /* Worker thread to service arbiter mappings based on dev SKUs */ @@ -171,6 +172,13 @@ static int adf_pf_enable_vf2pf_comms(struct adf_accel_dev *accel_dev) return 0; } +static void configure_iov_threads(struct adf_accel_dev *accel_dev, bool enable) +{ + adf_gen2_cfg_iov_thds(accel_dev, enable, + ADF_C3XXX_AE2FUNC_MAP_GRP_A_NUM_REGS, + ADF_C3XXX_AE2FUNC_MAP_GRP_B_NUM_REGS); +} + void adf_init_hw_data_c3xxx(struct adf_hw_device_data *hw_data) { hw_data->dev_class = &c3xxx_class; @@ -199,6 +207,7 @@ void adf_init_hw_data_c3xxx(struct adf_hw_device_data *hw_data) hw_data->fw_mmp_name = ADF_C3XXX_MMP; hw_data->init_admin_comms = adf_init_admin_comms; hw_data->exit_admin_comms = adf_exit_admin_comms; + hw_data->configure_iov_threads = configure_iov_threads; hw_data->disable_iov = adf_disable_sriov; hw_data->send_admin_init = adf_send_admin_init; hw_data->init_arb = adf_init_arb; diff --git a/drivers/crypto/qat/qat_c3xxx/adf_c3xxx_hw_data.h b/drivers/crypto/qat/qat_c3xxx/adf_c3xxx_hw_data.h index 94097816f68a..fece8e38025a 100644 --- a/drivers/crypto/qat/qat_c3xxx/adf_c3xxx_hw_data.h +++ b/drivers/crypto/qat/qat_c3xxx/adf_c3xxx_hw_data.h @@ -31,6 +31,10 @@ #define ADF_C3XXX_PF2VF_OFFSET(i) (0x3A000 + 0x280 + ((i) * 0x04)) #define ADF_C3XXX_VINTMSK_OFFSET(i) (0x3A000 + 0x200 + ((i) * 0x04)) +/* AE to function mapping */ +#define ADF_C3XXX_AE2FUNC_MAP_GRP_A_NUM_REGS 48 +#define ADF_C3XXX_AE2FUNC_MAP_GRP_B_NUM_REGS 6 + /* Firmware Binary */ #define ADF_C3XXX_FW "qat_c3xxx.bin" #define ADF_C3XXX_MMP "qat_c3xxx_mmp.bin" diff --git a/drivers/crypto/qat/qat_c62x/adf_c62x_hw_data.c b/drivers/crypto/qat/qat_c62x/adf_c62x_hw_data.c index 1334b43e46e4..d7bed610ae86 100644 --- a/drivers/crypto/qat/qat_c62x/adf_c62x_hw_data.c +++ b/drivers/crypto/qat/qat_c62x/adf_c62x_hw_data.c @@ -3,6 +3,7 @@ #include #include #include +#include #include "adf_c62x_hw_data.h" /* Worker thread to service arbiter mappings based on dev SKUs */ @@ -181,6 +182,13 @@ static int adf_pf_enable_vf2pf_comms(struct adf_accel_dev *accel_dev) return 0; } +static void configure_iov_threads(struct adf_accel_dev *accel_dev, bool enable) +{ + adf_gen2_cfg_iov_thds(accel_dev, enable, + ADF_C62X_AE2FUNC_MAP_GRP_A_NUM_REGS, + ADF_C62X_AE2FUNC_MAP_GRP_B_NUM_REGS); +} + void adf_init_hw_data_c62x(struct 
adf_hw_device_data *hw_data) { hw_data->dev_class = &c62x_class; @@ -209,6 +217,7 @@ void adf_init_hw_data_c62x(struct adf_hw_device_data *hw_data) hw_data->fw_mmp_name = ADF_C62X_MMP; hw_data->init_admin_comms = adf_init_admin_comms; hw_data->exit_admin_comms = adf_exit_admin_comms; + hw_data->configure_iov_threads = configure_iov_threads; hw_data->disable_iov = adf_disable_sriov; hw_data->send_admin_init = adf_send_admin_init; hw_data->init_arb = adf_init_arb; diff --git a/drivers/crypto/qat/qat_c62x/adf_c62x_hw_data.h b/drivers/crypto/qat/qat_c62x/adf_c62x_hw_data.h index a2e2961a2102..53d3cb577f5b 100644 --- a/drivers/crypto/qat/qat_c62x/adf_c62x_hw_data.h +++ b/drivers/crypto/qat/qat_c62x/adf_c62x_hw_data.h @@ -32,6 +32,10 @@ #define ADF_C62X_PF2VF_OFFSET(i) (0x3A000 + 0x280 + ((i) * 0x04)) #define ADF_C62X_VINTMSK_OFFSET(i) (0x3A000 + 0x200 + ((i) * 0x04)) +/* AE to function mapping */ +#define ADF_C62X_AE2FUNC_MAP_GRP_A_NUM_REGS 80 +#define ADF_C62X_AE2FUNC_MAP_GRP_B_NUM_REGS 10 + /* Firmware Binary */ #define ADF_C62X_FW "qat_c62x.bin" #define ADF_C62X_MMP "qat_c62x_mmp.bin" diff --git a/drivers/crypto/qat/qat_common/Makefile b/drivers/crypto/qat/qat_common/Makefile index 47a8e3d8b81a..25d28516dcdd 100644 --- a/drivers/crypto/qat/qat_common/Makefile +++ b/drivers/crypto/qat/qat_common/Makefile @@ -10,6 +10,7 @@ intel_qat-objs := adf_cfg.o \ adf_transport.o \ adf_admin.o \ adf_hw_arbiter.o \ + adf_gen2_hw_data.o \ qat_crypto.o \ qat_algs.o \ qat_asym_algs.o \ diff --git a/drivers/crypto/qat/qat_common/adf_accel_devices.h b/drivers/crypto/qat/qat_common/adf_accel_devices.h index 85b423d28f77..d7a27d15e137 100644 --- a/drivers/crypto/qat/qat_common/adf_accel_devices.h +++ b/drivers/crypto/qat/qat_common/adf_accel_devices.h @@ -125,6 +125,8 @@ struct adf_hw_device_data { void (*get_arb_mapping)(struct adf_accel_dev *accel_dev, const u32 **cfg); void (*disable_iov)(struct adf_accel_dev *accel_dev); + void (*configure_iov_threads)(struct adf_accel_dev *accel_dev, + bool enable); void (*enable_ints)(struct adf_accel_dev *accel_dev); int (*enable_vf2pf_comms)(struct adf_accel_dev *accel_dev); void (*reset_device)(struct adf_accel_dev *accel_dev); diff --git a/drivers/crypto/qat/qat_common/adf_gen2_hw_data.c b/drivers/crypto/qat/qat_common/adf_gen2_hw_data.c new file mode 100644 index 000000000000..26e345e3d7c3 --- /dev/null +++ b/drivers/crypto/qat/qat_common/adf_gen2_hw_data.c @@ -0,0 +1,38 @@ +// SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0-only) +/* Copyright(c) 2020 Intel Corporation */ +#include "adf_gen2_hw_data.h" + +void adf_gen2_cfg_iov_thds(struct adf_accel_dev *accel_dev, bool enable, + int num_a_regs, int num_b_regs) +{ + struct adf_hw_device_data *hw_data = accel_dev->hw_device; + void __iomem *pmisc_addr; + struct adf_bar *pmisc; + int pmisc_id, i; + u32 reg; + + pmisc_id = hw_data->get_misc_bar_id(hw_data); + pmisc = &GET_BARS(accel_dev)[pmisc_id]; + pmisc_addr = pmisc->virt_addr; + + /* Set/Unset Valid bit in AE Thread to PCIe Function Mapping Group A */ + for (i = 0; i < num_a_regs; i++) { + reg = READ_CSR_AE2FUNCTION_MAP_A(pmisc_addr, i); + if (enable) + reg |= AE2FUNCTION_MAP_VALID; + else + reg &= ~AE2FUNCTION_MAP_VALID; + WRITE_CSR_AE2FUNCTION_MAP_A(pmisc_addr, i, reg); + } + + /* Set/Unset Valid bit in AE Thread to PCIe Function Mapping Group B */ + for (i = 0; i < num_b_regs; i++) { + reg = READ_CSR_AE2FUNCTION_MAP_B(pmisc_addr, i); + if (enable) + reg |= AE2FUNCTION_MAP_VALID; + else + reg &= ~AE2FUNCTION_MAP_VALID; + WRITE_CSR_AE2FUNCTION_MAP_B(pmisc_addr, i, 
reg); + } +} +EXPORT_SYMBOL_GPL(adf_gen2_cfg_iov_thds); diff --git a/drivers/crypto/qat/qat_common/adf_gen2_hw_data.h b/drivers/crypto/qat/qat_common/adf_gen2_hw_data.h new file mode 100644 index 000000000000..1d348425d5f4 --- /dev/null +++ b/drivers/crypto/qat/qat_common/adf_gen2_hw_data.h @@ -0,0 +1,30 @@ +/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0-only) */ +/* Copyright(c) 2020 Intel Corporation */ +#ifndef ADF_GEN2_HW_DATA_H_ +#define ADF_GEN2_HW_DATA_H_ + +#include "adf_accel_devices.h" + +/* AE to function map */ +#define AE2FUNCTION_MAP_A_OFFSET (0x3A400 + 0x190) +#define AE2FUNCTION_MAP_B_OFFSET (0x3A400 + 0x310) +#define AE2FUNCTION_MAP_REG_SIZE 4 +#define AE2FUNCTION_MAP_VALID BIT(7) + +#define READ_CSR_AE2FUNCTION_MAP_A(pmisc_bar_addr, index) \ + ADF_CSR_RD(pmisc_bar_addr, AE2FUNCTION_MAP_A_OFFSET + \ + AE2FUNCTION_MAP_REG_SIZE * (index)) +#define WRITE_CSR_AE2FUNCTION_MAP_A(pmisc_bar_addr, index, value) \ + ADF_CSR_WR(pmisc_bar_addr, AE2FUNCTION_MAP_A_OFFSET + \ + AE2FUNCTION_MAP_REG_SIZE * (index), value) +#define READ_CSR_AE2FUNCTION_MAP_B(pmisc_bar_addr, index) \ + ADF_CSR_RD(pmisc_bar_addr, AE2FUNCTION_MAP_B_OFFSET + \ + AE2FUNCTION_MAP_REG_SIZE * (index)) +#define WRITE_CSR_AE2FUNCTION_MAP_B(pmisc_bar_addr, index, value) \ + ADF_CSR_WR(pmisc_bar_addr, AE2FUNCTION_MAP_B_OFFSET + \ + AE2FUNCTION_MAP_REG_SIZE * (index), value) + +void adf_gen2_cfg_iov_thds(struct adf_accel_dev *accel_dev, bool enable, + int num_a_regs, int num_b_regs); + +#endif diff --git a/drivers/crypto/qat/qat_common/adf_sriov.c b/drivers/crypto/qat/qat_common/adf_sriov.c index 963b2bea78f2..dde6c57ef15a 100644 --- a/drivers/crypto/qat/qat_common/adf_sriov.c +++ b/drivers/crypto/qat/qat_common/adf_sriov.c @@ -10,31 +10,6 @@ static struct workqueue_struct *pf2vf_resp_wq; -#define ME2FUNCTION_MAP_A_OFFSET (0x3A400 + 0x190) -#define ME2FUNCTION_MAP_A_NUM_REGS 96 - -#define ME2FUNCTION_MAP_B_OFFSET (0x3A400 + 0x310) -#define ME2FUNCTION_MAP_B_NUM_REGS 12 - -#define ME2FUNCTION_MAP_REG_SIZE 4 -#define ME2FUNCTION_MAP_VALID BIT(7) - -#define READ_CSR_ME2FUNCTION_MAP_A(pmisc_bar_addr, index) \ - ADF_CSR_RD(pmisc_bar_addr, ME2FUNCTION_MAP_A_OFFSET + \ - ME2FUNCTION_MAP_REG_SIZE * index) - -#define WRITE_CSR_ME2FUNCTION_MAP_A(pmisc_bar_addr, index, value) \ - ADF_CSR_WR(pmisc_bar_addr, ME2FUNCTION_MAP_A_OFFSET + \ - ME2FUNCTION_MAP_REG_SIZE * index, value) - -#define READ_CSR_ME2FUNCTION_MAP_B(pmisc_bar_addr, index) \ - ADF_CSR_RD(pmisc_bar_addr, ME2FUNCTION_MAP_B_OFFSET + \ - ME2FUNCTION_MAP_REG_SIZE * index) - -#define WRITE_CSR_ME2FUNCTION_MAP_B(pmisc_bar_addr, index, value) \ - ADF_CSR_WR(pmisc_bar_addr, ME2FUNCTION_MAP_B_OFFSET + \ - ME2FUNCTION_MAP_REG_SIZE * index, value) - struct adf_pf2vf_resp { struct work_struct pf2vf_resp_work; struct adf_accel_vf_info *vf_info; @@ -68,12 +43,8 @@ static int adf_enable_sriov(struct adf_accel_dev *accel_dev) struct pci_dev *pdev = accel_to_pci_dev(accel_dev); int totalvfs = pci_sriov_get_totalvfs(pdev); struct adf_hw_device_data *hw_data = accel_dev->hw_device; - struct adf_bar *pmisc = - &GET_BARS(accel_dev)[hw_data->get_misc_bar_id(hw_data)]; - void __iomem *pmisc_addr = pmisc->virt_addr; struct adf_accel_vf_info *vf_info; int i; - u32 reg; for (i = 0, vf_info = accel_dev->pf.vf_info; i < totalvfs; i++, vf_info++) { @@ -90,19 +61,8 @@ static int adf_enable_sriov(struct adf_accel_dev *accel_dev) DEFAULT_RATELIMIT_BURST); } - /* Set Valid bits in ME Thread to PCIe Function Mapping Group A */ - for (i = 0; i < ME2FUNCTION_MAP_A_NUM_REGS; i++) { - reg = 
READ_CSR_ME2FUNCTION_MAP_A(pmisc_addr, i); - reg |= ME2FUNCTION_MAP_VALID; - WRITE_CSR_ME2FUNCTION_MAP_A(pmisc_addr, i, reg); - } - - /* Set Valid bits in ME Thread to PCIe Function Mapping Group B */ - for (i = 0; i < ME2FUNCTION_MAP_B_NUM_REGS; i++) { - reg = READ_CSR_ME2FUNCTION_MAP_B(pmisc_addr, i); - reg |= ME2FUNCTION_MAP_VALID; - WRITE_CSR_ME2FUNCTION_MAP_B(pmisc_addr, i, reg); - } + /* Set Valid bits in AE Thread to PCIe Function Mapping */ + hw_data->configure_iov_threads(accel_dev, true); /* Enable VF to PF interrupts for all VFs */ adf_enable_vf2pf_interrupts(accel_dev, GENMASK_ULL(totalvfs - 1, 0)); @@ -127,12 +87,8 @@ static int adf_enable_sriov(struct adf_accel_dev *accel_dev) void adf_disable_sriov(struct adf_accel_dev *accel_dev) { struct adf_hw_device_data *hw_data = accel_dev->hw_device; - struct adf_bar *pmisc = - &GET_BARS(accel_dev)[hw_data->get_misc_bar_id(hw_data)]; - void __iomem *pmisc_addr = pmisc->virt_addr; int totalvfs = pci_sriov_get_totalvfs(accel_to_pci_dev(accel_dev)); struct adf_accel_vf_info *vf; - u32 reg; int i; if (!accel_dev->pf.vf_info) @@ -145,19 +101,8 @@ void adf_disable_sriov(struct adf_accel_dev *accel_dev) /* Disable VF to PF interrupts */ adf_disable_vf2pf_interrupts(accel_dev, 0xFFFFFFFF); - /* Clear Valid bits in ME Thread to PCIe Function Mapping Group A */ - for (i = 0; i < ME2FUNCTION_MAP_A_NUM_REGS; i++) { - reg = READ_CSR_ME2FUNCTION_MAP_A(pmisc_addr, i); - reg &= ~ME2FUNCTION_MAP_VALID; - WRITE_CSR_ME2FUNCTION_MAP_A(pmisc_addr, i, reg); - } - - /* Clear Valid bits in ME Thread to PCIe Function Mapping Group B */ - for (i = 0; i < ME2FUNCTION_MAP_B_NUM_REGS; i++) { - reg = READ_CSR_ME2FUNCTION_MAP_B(pmisc_addr, i); - reg &= ~ME2FUNCTION_MAP_VALID; - WRITE_CSR_ME2FUNCTION_MAP_B(pmisc_addr, i, reg); - } + /* Clear Valid bits in AE Thread to PCIe Function Mapping */ + hw_data->configure_iov_threads(accel_dev, false); for (i = 0, vf = accel_dev->pf.vf_info; i < totalvfs; i++, vf++) { tasklet_disable(&vf->vf2pf_bh_tasklet); diff --git a/drivers/crypto/qat/qat_dh895xcc/adf_dh895xcc_hw_data.c b/drivers/crypto/qat/qat_dh895xcc/adf_dh895xcc_hw_data.c index 1f3ea3ba1cee..7b2f13ff49fd 100644 --- a/drivers/crypto/qat/qat_dh895xcc/adf_dh895xcc_hw_data.c +++ b/drivers/crypto/qat/qat_dh895xcc/adf_dh895xcc_hw_data.c @@ -3,6 +3,7 @@ #include #include #include +#include #include "adf_dh895xcc_hw_data.h" /* Worker thread to service arbiter mappings based on dev SKUs */ @@ -180,6 +181,13 @@ static int adf_pf_enable_vf2pf_comms(struct adf_accel_dev *accel_dev) return 0; } +static void configure_iov_threads(struct adf_accel_dev *accel_dev, bool enable) +{ + adf_gen2_cfg_iov_thds(accel_dev, enable, + ADF_DH895XCC_AE2FUNC_MAP_GRP_A_NUM_REGS, + ADF_DH895XCC_AE2FUNC_MAP_GRP_B_NUM_REGS); +} + void adf_init_hw_data_dh895xcc(struct adf_hw_device_data *hw_data) { hw_data->dev_class = &dh895xcc_class; @@ -208,6 +216,7 @@ void adf_init_hw_data_dh895xcc(struct adf_hw_device_data *hw_data) hw_data->fw_mmp_name = ADF_DH895XCC_MMP; hw_data->init_admin_comms = adf_init_admin_comms; hw_data->exit_admin_comms = adf_exit_admin_comms; + hw_data->configure_iov_threads = configure_iov_threads; hw_data->disable_iov = adf_disable_sriov; hw_data->send_admin_init = adf_send_admin_init; hw_data->init_arb = adf_init_arb; diff --git a/drivers/crypto/qat/qat_dh895xcc/adf_dh895xcc_hw_data.h b/drivers/crypto/qat/qat_dh895xcc/adf_dh895xcc_hw_data.h index 082a04466dca..4d613923d155 100644 --- a/drivers/crypto/qat/qat_dh895xcc/adf_dh895xcc_hw_data.h +++ 
b/drivers/crypto/qat/qat_dh895xcc/adf_dh895xcc_hw_data.h @@ -36,6 +36,11 @@ #define ADF_DH895XCC_PF2VF_OFFSET(i) (0x3A000 + 0x280 + ((i) * 0x04)) #define ADF_DH895XCC_VINTMSK_OFFSET(i) (0x3A000 + 0x200 + ((i) * 0x04)) + +/* AE to function mapping */ +#define ADF_DH895XCC_AE2FUNC_MAP_GRP_A_NUM_REGS 96 +#define ADF_DH895XCC_AE2FUNC_MAP_GRP_B_NUM_REGS 12 + /* FW names */ #define ADF_DH895XCC_FW "qat_895xcc.bin" #define ADF_DH895XCC_MMP "qat_895xcc_mmp.bin" From patchwork Mon Oct 12 20:38:21 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Giovanni Cabiddu X-Patchwork-Id: 269566 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B48DDC433DF for ; Mon, 12 Oct 2020 20:39:14 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 7F9EE2145D for ; Mon, 12 Oct 2020 20:39:14 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729044AbgJLUjO (ORCPT ); Mon, 12 Oct 2020 16:39:14 -0400 Received: from mga09.intel.com ([134.134.136.24]:33900 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728412AbgJLUjN (ORCPT ); Mon, 12 Oct 2020 16:39:13 -0400 IronPort-SDR: U8VTy0s5h2v73vBf4wFCqGsC8+qZQPEFtqveLeMWs8HSWmowoWiNffGAs6Zq4Kn2iXJQL4K/fp jUqyycBXs7NQ== X-IronPort-AV: E=McAfee;i="6000,8403,9772"; a="165913071" X-IronPort-AV: E=Sophos;i="5.77,367,1596524400"; d="scan'208";a="165913071" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Oct 2020 13:39:12 -0700 IronPort-SDR: ADaZ+7+a+qvCMgOKVjMjVY/8t/lb1hpjjJx4JeqPQHzIvENN9UQtP2GbVvyOeUTJIlnyjLMCY9 l6Qer63VyCng== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.77,367,1596524400"; d="scan'208";a="299328135" Received: from silpixa00400314.ir.intel.com (HELO silpixa00400314.ger.corp.intel.com) ([10.237.222.51]) by fmsmga007.fm.intel.com with ESMTP; 12 Oct 2020 13:39:10 -0700 From: Giovanni Cabiddu To: herbert@gondor.apana.org.au Cc: linux-crypto@vger.kernel.org, qat-linux@intel.com, Giovanni Cabiddu , Fiona Trahe , Wojciech Ziemba , Andy Shevchenko Subject: [PATCH 05/31] crypto: qat - split transport CSR access logic Date: Mon, 12 Oct 2020 21:38:21 +0100 Message-Id: <20201012203847.340030-6-giovanni.cabiddu@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201012203847.340030-1-giovanni.cabiddu@intel.com> References: <20201012203847.340030-1-giovanni.cabiddu@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Abstract access to transport CSRs and move generation specific code into adf_gen2_hw_data.c in preparation for the introduction of the qat_4xxx driver. 
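The shape of this abstraction is a table of function pointers that the common transport code calls through, with a per-generation initializer filling it in. A minimal, self-contained C sketch of the idea follows (a userspace model for illustration only; the names mirror but simplify the driver's adf_hw_csr_ops and adf_gen2_init_hw_csr_ops, and the printf calls stand in for the real MMIO accessors):

#include <stdint.h>
#include <stdio.h>

/* simplified model of struct adf_hw_csr_ops: one entry per ring CSR accessor */
struct hw_csr_ops {
	uint32_t (*read_csr_ring_head)(void *csr_base, uint32_t bank,
				       uint32_t ring);
	void (*write_csr_ring_tail)(void *csr_base, uint32_t bank,
				    uint32_t ring, uint32_t value);
};

/* GEN2 implementations; a real driver would perform MMIO accesses here */
static uint32_t gen2_read_csr_ring_head(void *csr_base, uint32_t bank,
					uint32_t ring)
{
	printf("GEN2: read ring head, bank %u ring %u\n", bank, ring);
	return 0;
}

static void gen2_write_csr_ring_tail(void *csr_base, uint32_t bank,
				     uint32_t ring, uint32_t value)
{
	printf("GEN2: write ring tail, bank %u ring %u <- 0x%x\n",
	       bank, ring, value);
}

/* per-generation initializer, analogous to adf_gen2_init_hw_csr_ops() */
static void gen2_init_hw_csr_ops(struct hw_csr_ops *ops)
{
	ops->read_csr_ring_head = gen2_read_csr_ring_head;
	ops->write_csr_ring_tail = gen2_write_csr_ring_tail;
}

int main(void)
{
	struct hw_csr_ops ops;

	gen2_init_hw_csr_ops(&ops);
	/* transport code is generation-agnostic: it only sees the ops table */
	ops.write_csr_ring_tail(NULL, 0, 2, 0x40);
	ops.read_csr_ring_head(NULL, 0, 2);
	return 0;
}

A future generation (for example the qat_4xxx driver mentioned above) can then supply its own initializer without touching the common transport code.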
Signed-off-by: Giovanni Cabiddu Reviewed-by: Fiona Trahe Reviewed-by: Wojciech Ziemba Reviewed-by: Andy Shevchenko --- .../crypto/qat/qat_c3xxx/adf_c3xxx_hw_data.c | 1 + .../qat/qat_c3xxxvf/adf_c3xxxvf_hw_data.c | 2 + .../crypto/qat/qat_c62x/adf_c62x_hw_data.c | 1 + .../qat/qat_c62xvf/adf_c62xvf_hw_data.c | 2 + .../crypto/qat/qat_common/adf_accel_devices.h | 27 ++++++ .../crypto/qat/qat_common/adf_gen2_hw_data.c | 85 ++++++++++++++++++ .../crypto/qat/qat_common/adf_gen2_hw_data.h | 1 + drivers/crypto/qat/qat_common/adf_isr.c | 4 +- drivers/crypto/qat/qat_common/adf_transport.c | 86 +++++++++++++------ .../qat/qat_common/adf_transport_debug.c | 22 ++--- drivers/crypto/qat/qat_common/adf_vf_isr.c | 5 +- .../qat/qat_dh895xcc/adf_dh895xcc_hw_data.c | 1 + .../qat_dh895xccvf/adf_dh895xccvf_hw_data.c | 2 + 13 files changed, 198 insertions(+), 41 deletions(-) diff --git a/drivers/crypto/qat/qat_c3xxx/adf_c3xxx_hw_data.c b/drivers/crypto/qat/qat_c3xxx/adf_c3xxx_hw_data.c index f449b2a0e82d..7af38b947cfe 100644 --- a/drivers/crypto/qat/qat_c3xxx/adf_c3xxx_hw_data.c +++ b/drivers/crypto/qat/qat_c3xxx/adf_c3xxx_hw_data.c @@ -217,6 +217,7 @@ void adf_init_hw_data_c3xxx(struct adf_hw_device_data *hw_data) hw_data->enable_vf2pf_comms = adf_pf_enable_vf2pf_comms; hw_data->reset_device = adf_reset_flr; hw_data->min_iov_compat_ver = ADF_PFVF_COMPATIBILITY_VERSION; + adf_gen2_init_hw_csr_ops(&hw_data->csr_ops); } void adf_clean_hw_data_c3xxx(struct adf_hw_device_data *hw_data) diff --git a/drivers/crypto/qat/qat_c3xxxvf/adf_c3xxxvf_hw_data.c b/drivers/crypto/qat/qat_c3xxxvf/adf_c3xxxvf_hw_data.c index 80a355e85a72..15f6b9bdfb22 100644 --- a/drivers/crypto/qat/qat_c3xxxvf/adf_c3xxxvf_hw_data.c +++ b/drivers/crypto/qat/qat_c3xxxvf/adf_c3xxxvf_hw_data.c @@ -3,6 +3,7 @@ #include #include #include +#include #include "adf_c3xxxvf_hw_data.h" static struct adf_hw_device_class c3xxxiov_class = { @@ -98,6 +99,7 @@ void adf_init_hw_data_c3xxxiov(struct adf_hw_device_data *hw_data) hw_data->min_iov_compat_ver = ADF_PFVF_COMPATIBILITY_VERSION; hw_data->dev_class->instances++; adf_devmgr_update_class_index(hw_data); + adf_gen2_init_hw_csr_ops(&hw_data->csr_ops); } void adf_clean_hw_data_c3xxxiov(struct adf_hw_device_data *hw_data) diff --git a/drivers/crypto/qat/qat_c62x/adf_c62x_hw_data.c b/drivers/crypto/qat/qat_c62x/adf_c62x_hw_data.c index d7bed610ae86..c18fb77dd8ec 100644 --- a/drivers/crypto/qat/qat_c62x/adf_c62x_hw_data.c +++ b/drivers/crypto/qat/qat_c62x/adf_c62x_hw_data.c @@ -227,6 +227,7 @@ void adf_init_hw_data_c62x(struct adf_hw_device_data *hw_data) hw_data->enable_vf2pf_comms = adf_pf_enable_vf2pf_comms; hw_data->reset_device = adf_reset_flr; hw_data->min_iov_compat_ver = ADF_PFVF_COMPATIBILITY_VERSION; + adf_gen2_init_hw_csr_ops(&hw_data->csr_ops); } void adf_clean_hw_data_c62x(struct adf_hw_device_data *hw_data) diff --git a/drivers/crypto/qat/qat_c62xvf/adf_c62xvf_hw_data.c b/drivers/crypto/qat/qat_c62xvf/adf_c62xvf_hw_data.c index 7725387e58f8..d231583428c9 100644 --- a/drivers/crypto/qat/qat_c62xvf/adf_c62xvf_hw_data.c +++ b/drivers/crypto/qat/qat_c62xvf/adf_c62xvf_hw_data.c @@ -3,6 +3,7 @@ #include #include #include +#include #include "adf_c62xvf_hw_data.h" static struct adf_hw_device_class c62xiov_class = { @@ -98,6 +99,7 @@ void adf_init_hw_data_c62xiov(struct adf_hw_device_data *hw_data) hw_data->min_iov_compat_ver = ADF_PFVF_COMPATIBILITY_VERSION; hw_data->dev_class->instances++; adf_devmgr_update_class_index(hw_data); + adf_gen2_init_hw_csr_ops(&hw_data->csr_ops); } void 
adf_clean_hw_data_c62xiov(struct adf_hw_device_data *hw_data) diff --git a/drivers/crypto/qat/qat_common/adf_accel_devices.h b/drivers/crypto/qat/qat_common/adf_accel_devices.h index d7a27d15e137..459e22076813 100644 --- a/drivers/crypto/qat/qat_common/adf_accel_devices.h +++ b/drivers/crypto/qat/qat_common/adf_accel_devices.h @@ -97,6 +97,31 @@ struct adf_hw_device_class { u32 instances; } __packed; +struct adf_hw_csr_ops { + u32 (*read_csr_ring_head)(void __iomem *csr_base_addr, u32 bank, + u32 ring); + void (*write_csr_ring_head)(void __iomem *csr_base_addr, u32 bank, + u32 ring, u32 value); + u32 (*read_csr_ring_tail)(void __iomem *csr_base_addr, u32 bank, + u32 ring); + void (*write_csr_ring_tail)(void __iomem *csr_base_addr, u32 bank, + u32 ring, u32 value); + u32 (*read_csr_e_stat)(void __iomem *csr_base_addr, u32 bank); + void (*write_csr_ring_config)(void __iomem *csr_base_addr, u32 bank, + u32 ring, u32 value); + void (*write_csr_ring_base)(void __iomem *csr_base_addr, u32 bank, + u32 ring, dma_addr_t addr); + void (*write_csr_int_flag)(void __iomem *csr_base_addr, u32 bank, + u32 value); + void (*write_csr_int_srcsel)(void __iomem *csr_base_addr, u32 bank); + void (*write_csr_int_col_en)(void __iomem *csr_base_addr, u32 bank, + u32 value); + void (*write_csr_int_col_ctl)(void __iomem *csr_base_addr, u32 bank, + u32 value); + void (*write_csr_int_flag_and_col)(void __iomem *csr_base_addr, + u32 bank, u32 value); +}; + struct adf_cfg_device_data; struct adf_accel_dev; struct adf_etr_data; @@ -130,6 +155,7 @@ struct adf_hw_device_data { void (*enable_ints)(struct adf_accel_dev *accel_dev); int (*enable_vf2pf_comms)(struct adf_accel_dev *accel_dev); void (*reset_device)(struct adf_accel_dev *accel_dev); + struct adf_hw_csr_ops csr_ops; const char *fw_name; const char *fw_mmp_name; u32 fuses; @@ -162,6 +188,7 @@ struct adf_hw_device_data { #define GET_NUM_RINGS_PER_BANK(accel_dev) \ GET_HW_DATA(accel_dev)->num_rings_per_bank #define GET_MAX_ACCELENGINES(accel_dev) (GET_HW_DATA(accel_dev)->num_engines) +#define GET_CSR_OPS(accel_dev) (&(accel_dev)->hw_device->csr_ops) #define accel_to_pci_dev(accel_ptr) accel_ptr->accel_pci_dev.pci_dev struct adf_admin_comms; diff --git a/drivers/crypto/qat/qat_common/adf_gen2_hw_data.c b/drivers/crypto/qat/qat_common/adf_gen2_hw_data.c index 26e345e3d7c3..07a9211bf7f9 100644 --- a/drivers/crypto/qat/qat_common/adf_gen2_hw_data.c +++ b/drivers/crypto/qat/qat_common/adf_gen2_hw_data.c @@ -1,6 +1,7 @@ // SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0-only) /* Copyright(c) 2020 Intel Corporation */ #include "adf_gen2_hw_data.h" +#include "adf_transport_access_macros.h" void adf_gen2_cfg_iov_thds(struct adf_accel_dev *accel_dev, bool enable, int num_a_regs, int num_b_regs) @@ -36,3 +37,87 @@ void adf_gen2_cfg_iov_thds(struct adf_accel_dev *accel_dev, bool enable, } } EXPORT_SYMBOL_GPL(adf_gen2_cfg_iov_thds); + +static u32 read_csr_ring_head(void __iomem *csr_base_addr, u32 bank, u32 ring) +{ + return READ_CSR_RING_HEAD(csr_base_addr, bank, ring); +} + +static void write_csr_ring_head(void __iomem *csr_base_addr, u32 bank, u32 ring, + u32 value) +{ + WRITE_CSR_RING_HEAD(csr_base_addr, bank, ring, value); +} + +static u32 read_csr_ring_tail(void __iomem *csr_base_addr, u32 bank, u32 ring) +{ + return READ_CSR_RING_TAIL(csr_base_addr, bank, ring); +} + +static void write_csr_ring_tail(void __iomem *csr_base_addr, u32 bank, u32 ring, + u32 value) +{ + WRITE_CSR_RING_TAIL(csr_base_addr, bank, ring, value); +} + +static u32 read_csr_e_stat(void __iomem 
*csr_base_addr, u32 bank) +{ + return READ_CSR_E_STAT(csr_base_addr, bank); +} + +static void write_csr_ring_config(void __iomem *csr_base_addr, u32 bank, + u32 ring, u32 value) +{ + WRITE_CSR_RING_CONFIG(csr_base_addr, bank, ring, value); +} + +static void write_csr_ring_base(void __iomem *csr_base_addr, u32 bank, u32 ring, + dma_addr_t addr) +{ + WRITE_CSR_RING_BASE(csr_base_addr, bank, ring, addr); +} + +static void write_csr_int_flag(void __iomem *csr_base_addr, u32 bank, u32 value) +{ + WRITE_CSR_INT_FLAG(csr_base_addr, bank, value); +} + +static void write_csr_int_srcsel(void __iomem *csr_base_addr, u32 bank) +{ + WRITE_CSR_INT_SRCSEL(csr_base_addr, bank); +} + +static void write_csr_int_col_en(void __iomem *csr_base_addr, u32 bank, + u32 value) +{ + WRITE_CSR_INT_COL_EN(csr_base_addr, bank, value); +} + +static void write_csr_int_col_ctl(void __iomem *csr_base_addr, u32 bank, + u32 value) +{ + WRITE_CSR_INT_COL_CTL(csr_base_addr, bank, value); +} + +static void write_csr_int_flag_and_col(void __iomem *csr_base_addr, u32 bank, + u32 value) +{ + WRITE_CSR_INT_FLAG_AND_COL(csr_base_addr, bank, value); +} + +void adf_gen2_init_hw_csr_ops(struct adf_hw_csr_ops *csr_ops) +{ + csr_ops->read_csr_ring_head = read_csr_ring_head; + csr_ops->write_csr_ring_head = write_csr_ring_head; + csr_ops->read_csr_ring_tail = read_csr_ring_tail; + csr_ops->write_csr_ring_tail = write_csr_ring_tail; + csr_ops->read_csr_e_stat = read_csr_e_stat; + csr_ops->write_csr_ring_config = write_csr_ring_config; + csr_ops->write_csr_ring_base = write_csr_ring_base; + csr_ops->write_csr_int_flag = write_csr_int_flag; + csr_ops->write_csr_int_srcsel = write_csr_int_srcsel; + csr_ops->write_csr_int_col_en = write_csr_int_col_en; + csr_ops->write_csr_int_col_ctl = write_csr_int_col_ctl; + csr_ops->write_csr_int_flag_and_col = write_csr_int_flag_and_col; +} +EXPORT_SYMBOL_GPL(adf_gen2_init_hw_csr_ops); diff --git a/drivers/crypto/qat/qat_common/adf_gen2_hw_data.h b/drivers/crypto/qat/qat_common/adf_gen2_hw_data.h index 1d348425d5f4..e6d3919a56a1 100644 --- a/drivers/crypto/qat/qat_common/adf_gen2_hw_data.h +++ b/drivers/crypto/qat/qat_common/adf_gen2_hw_data.h @@ -26,5 +26,6 @@ void adf_gen2_cfg_iov_thds(struct adf_accel_dev *accel_dev, bool enable, int num_a_regs, int num_b_regs); +void adf_gen2_init_hw_csr_ops(struct adf_hw_csr_ops *csr_ops); #endif diff --git a/drivers/crypto/qat/qat_common/adf_isr.c b/drivers/crypto/qat/qat_common/adf_isr.c index 36136f7db509..5444f0ea0a1d 100644 --- a/drivers/crypto/qat/qat_common/adf_isr.c +++ b/drivers/crypto/qat/qat_common/adf_isr.c @@ -50,8 +50,10 @@ static void adf_disable_msix(struct adf_accel_pci *pci_dev_info) static irqreturn_t adf_msix_isr_bundle(int irq, void *bank_ptr) { struct adf_etr_bank_data *bank = bank_ptr; + struct adf_hw_csr_ops *csr_ops = GET_CSR_OPS(bank->accel_dev); - WRITE_CSR_INT_FLAG_AND_COL(bank->csr_addr, bank->bank_number, 0); + csr_ops->write_csr_int_flag_and_col(bank->csr_addr, bank->bank_number, + 0); tasklet_hi_schedule(&bank->resp_handler); return IRQ_HANDLED; } diff --git a/drivers/crypto/qat/qat_common/adf_transport.c b/drivers/crypto/qat/qat_common/adf_transport.c index 24ddaaaa55b1..03fb7812818b 100644 --- a/drivers/crypto/qat/qat_common/adf_transport.c +++ b/drivers/crypto/qat/qat_common/adf_transport.c @@ -54,24 +54,32 @@ static void adf_unreserve_ring(struct adf_etr_bank_data *bank, u32 ring) static void adf_enable_ring_irq(struct adf_etr_bank_data *bank, u32 ring) { + struct adf_hw_csr_ops *csr_ops = GET_CSR_OPS(bank->accel_dev); + 
spin_lock_bh(&bank->lock); bank->irq_mask |= (1 << ring); spin_unlock_bh(&bank->lock); - WRITE_CSR_INT_COL_EN(bank->csr_addr, bank->bank_number, bank->irq_mask); - WRITE_CSR_INT_COL_CTL(bank->csr_addr, bank->bank_number, - bank->irq_coalesc_timer); + csr_ops->write_csr_int_col_en(bank->csr_addr, bank->bank_number, + bank->irq_mask); + csr_ops->write_csr_int_col_ctl(bank->csr_addr, bank->bank_number, + bank->irq_coalesc_timer); } static void adf_disable_ring_irq(struct adf_etr_bank_data *bank, u32 ring) { + struct adf_hw_csr_ops *csr_ops = GET_CSR_OPS(bank->accel_dev); + spin_lock_bh(&bank->lock); bank->irq_mask &= ~(1 << ring); spin_unlock_bh(&bank->lock); - WRITE_CSR_INT_COL_EN(bank->csr_addr, bank->bank_number, bank->irq_mask); + csr_ops->write_csr_int_col_en(bank->csr_addr, bank->bank_number, + bank->irq_mask); } int adf_send_message(struct adf_etr_ring_data *ring, u32 *msg) { + struct adf_hw_csr_ops *csr_ops = GET_CSR_OPS(ring->bank->accel_dev); + if (atomic_add_return(1, ring->inflights) > ADF_MAX_INFLIGHTS(ring->ring_size, ring->msg_size)) { atomic_dec(ring->inflights); @@ -84,14 +92,17 @@ int adf_send_message(struct adf_etr_ring_data *ring, u32 *msg) ring->tail = adf_modulo(ring->tail + ADF_MSG_SIZE_TO_BYTES(ring->msg_size), ADF_RING_SIZE_MODULO(ring->ring_size)); - WRITE_CSR_RING_TAIL(ring->bank->csr_addr, ring->bank->bank_number, - ring->ring_number, ring->tail); + csr_ops->write_csr_ring_tail(ring->bank->csr_addr, + ring->bank->bank_number, ring->ring_number, + ring->tail); spin_unlock_bh(&ring->lock); + return 0; } static int adf_handle_response(struct adf_etr_ring_data *ring) { + struct adf_hw_csr_ops *csr_ops = GET_CSR_OPS(ring->bank->accel_dev); u32 msg_counter = 0; u32 *msg = (u32 *)((uintptr_t)ring->base_addr + ring->head); @@ -105,30 +116,36 @@ static int adf_handle_response(struct adf_etr_ring_data *ring) msg_counter++; msg = (u32 *)((uintptr_t)ring->base_addr + ring->head); } - if (msg_counter > 0) - WRITE_CSR_RING_HEAD(ring->bank->csr_addr, - ring->bank->bank_number, - ring->ring_number, ring->head); + if (msg_counter > 0) { + csr_ops->write_csr_ring_head(ring->bank->csr_addr, + ring->bank->bank_number, + ring->ring_number, ring->head); + } return 0; } static void adf_configure_tx_ring(struct adf_etr_ring_data *ring) { + struct adf_hw_csr_ops *csr_ops = GET_CSR_OPS(ring->bank->accel_dev); u32 ring_config = BUILD_RING_CONFIG(ring->ring_size); - WRITE_CSR_RING_CONFIG(ring->bank->csr_addr, ring->bank->bank_number, - ring->ring_number, ring_config); + csr_ops->write_csr_ring_config(ring->bank->csr_addr, + ring->bank->bank_number, + ring->ring_number, ring_config); + } static void adf_configure_rx_ring(struct adf_etr_ring_data *ring) { + struct adf_hw_csr_ops *csr_ops = GET_CSR_OPS(ring->bank->accel_dev); u32 ring_config = BUILD_RESP_RING_CONFIG(ring->ring_size, ADF_RING_NEAR_WATERMARK_512, ADF_RING_NEAR_WATERMARK_0); - WRITE_CSR_RING_CONFIG(ring->bank->csr_addr, ring->bank->bank_number, - ring->ring_number, ring_config); + csr_ops->write_csr_ring_config(ring->bank->csr_addr, + ring->bank->bank_number, + ring->ring_number, ring_config); } static int adf_init_ring(struct adf_etr_ring_data *ring) @@ -136,6 +153,7 @@ static int adf_init_ring(struct adf_etr_ring_data *ring) struct adf_etr_bank_data *bank = ring->bank; struct adf_accel_dev *accel_dev = bank->accel_dev; struct adf_hw_device_data *hw_data = accel_dev->hw_device; + struct adf_hw_csr_ops *csr_ops = GET_CSR_OPS(accel_dev); u64 ring_base; u32 ring_size_bytes = ADF_SIZE_TO_RING_SIZE_IN_BYTES(ring->ring_size); @@ -163,8 
+181,9 @@ static int adf_init_ring(struct adf_etr_ring_data *ring) adf_configure_rx_ring(ring); ring_base = BUILD_RING_BASE_ADDR(ring->dma_addr, ring->ring_size); - WRITE_CSR_RING_BASE(ring->bank->csr_addr, ring->bank->bank_number, - ring->ring_number, ring_base); + csr_ops->write_csr_ring_base(ring->bank->csr_addr, + ring->bank->bank_number, ring->ring_number, + ring_base); spin_lock_init(&ring->lock); return 0; } @@ -269,15 +288,17 @@ int adf_create_ring(struct adf_accel_dev *accel_dev, const char *section, void adf_remove_ring(struct adf_etr_ring_data *ring) { struct adf_etr_bank_data *bank = ring->bank; + struct adf_hw_csr_ops *csr_ops = GET_CSR_OPS(bank->accel_dev); /* Disable interrupts for the given ring */ adf_disable_ring_irq(bank, ring->ring_number); /* Clear PCI config space */ - WRITE_CSR_RING_CONFIG(bank->csr_addr, bank->bank_number, - ring->ring_number, 0); - WRITE_CSR_RING_BASE(bank->csr_addr, bank->bank_number, - ring->ring_number, 0); + + csr_ops->write_csr_ring_config(bank->csr_addr, bank->bank_number, + ring->ring_number, 0); + csr_ops->write_csr_ring_base(bank->csr_addr, bank->bank_number, + ring->ring_number, 0); adf_ring_debugfs_rm(ring); adf_unreserve_ring(bank, ring->ring_number); /* Disable HW arbitration for the given ring */ @@ -287,11 +308,14 @@ void adf_remove_ring(struct adf_etr_ring_data *ring) static void adf_ring_response_handler(struct adf_etr_bank_data *bank) { - u8 num_rings_per_bank = GET_NUM_RINGS_PER_BANK(bank->accel_dev); + struct adf_accel_dev *accel_dev = bank->accel_dev; + u8 num_rings_per_bank = GET_NUM_RINGS_PER_BANK(accel_dev); + struct adf_hw_csr_ops *csr_ops = GET_CSR_OPS(accel_dev); unsigned long empty_rings; int i; - empty_rings = READ_CSR_E_STAT(bank->csr_addr, bank->bank_number); + empty_rings = csr_ops->read_csr_e_stat(bank->csr_addr, + bank->bank_number); empty_rings = ~empty_rings & bank->irq_mask; for_each_set_bit(i, &empty_rings, num_rings_per_bank) @@ -301,11 +325,13 @@ static void adf_ring_response_handler(struct adf_etr_bank_data *bank) void adf_response_handler(uintptr_t bank_addr) { struct adf_etr_bank_data *bank = (void *)bank_addr; + struct adf_hw_csr_ops *csr_ops = GET_CSR_OPS(bank->accel_dev); /* Handle all the responses and reenable IRQs */ adf_ring_response_handler(bank); - WRITE_CSR_INT_FLAG_AND_COL(bank->csr_addr, bank->bank_number, - bank->irq_mask); + + csr_ops->write_csr_int_flag_and_col(bank->csr_addr, bank->bank_number, + bank->irq_mask); } static inline int adf_get_cfg_int(struct adf_accel_dev *accel_dev, @@ -345,6 +371,7 @@ static int adf_init_bank(struct adf_accel_dev *accel_dev, { struct adf_hw_device_data *hw_data = accel_dev->hw_device; u8 num_rings_per_bank = hw_data->num_rings_per_bank; + struct adf_hw_csr_ops *csr_ops = &hw_data->csr_ops; struct adf_etr_ring_data *ring; struct adf_etr_ring_data *tx_ring; u32 i, coalesc_enabled = 0; @@ -375,8 +402,9 @@ static int adf_init_bank(struct adf_accel_dev *accel_dev, bank->irq_coalesc_timer = ADF_COALESCING_MIN_TIME; for (i = 0; i < num_rings_per_bank; i++) { - WRITE_CSR_RING_CONFIG(csr_addr, bank_num, i, 0); - WRITE_CSR_RING_BASE(csr_addr, bank_num, i, 0); + csr_ops->write_csr_ring_config(csr_addr, bank_num, i, 0); + csr_ops->write_csr_ring_base(csr_addr, bank_num, i, 0); + ring = &bank->rings[i]; if (hw_data->tx_rings_mask & (1 << i)) { ring->inflights = @@ -401,8 +429,10 @@ static int adf_init_bank(struct adf_accel_dev *accel_dev, goto err; } - WRITE_CSR_INT_FLAG(csr_addr, bank_num, ADF_BANK_INT_FLAG_CLEAR_MASK); - WRITE_CSR_INT_SRCSEL(csr_addr, bank_num); + 
csr_ops->write_csr_int_flag(csr_addr, bank_num, + ADF_BANK_INT_FLAG_CLEAR_MASK); + csr_ops->write_csr_int_srcsel(csr_addr, bank_num); + return 0; err: ring_mask = hw_data->tx_rings_mask; diff --git a/drivers/crypto/qat/qat_common/adf_transport_debug.c b/drivers/crypto/qat/qat_common/adf_transport_debug.c index da79d734c035..1205186ad51e 100644 --- a/drivers/crypto/qat/qat_common/adf_transport_debug.c +++ b/drivers/crypto/qat/qat_common/adf_transport_debug.c @@ -42,16 +42,17 @@ static int adf_ring_show(struct seq_file *sfile, void *v) { struct adf_etr_ring_data *ring = sfile->private; struct adf_etr_bank_data *bank = ring->bank; + struct adf_hw_csr_ops *csr_ops = GET_CSR_OPS(bank->accel_dev); void __iomem *csr = ring->bank->csr_addr; if (v == SEQ_START_TOKEN) { int head, tail, empty; - head = READ_CSR_RING_HEAD(csr, bank->bank_number, - ring->ring_number); - tail = READ_CSR_RING_TAIL(csr, bank->bank_number, - ring->ring_number); - empty = READ_CSR_E_STAT(csr, bank->bank_number); + head = csr_ops->read_csr_ring_head(csr, bank->bank_number, + ring->ring_number); + tail = csr_ops->read_csr_ring_tail(csr, bank->bank_number, + ring->ring_number); + empty = csr_ops->read_csr_e_stat(csr, bank->bank_number); seq_puts(sfile, "------- Ring configuration -------\n"); seq_printf(sfile, "ring name: %s\n", @@ -144,6 +145,7 @@ static void *adf_bank_next(struct seq_file *sfile, void *v, loff_t *pos) static int adf_bank_show(struct seq_file *sfile, void *v) { struct adf_etr_bank_data *bank = sfile->private; + struct adf_hw_csr_ops *csr_ops = GET_CSR_OPS(bank->accel_dev); if (v == SEQ_START_TOKEN) { seq_printf(sfile, "------- Bank %d configuration -------\n", @@ -157,11 +159,11 @@ static int adf_bank_show(struct seq_file *sfile, void *v) if (!(bank->ring_mask & 1 << ring_id)) return 0; - head = READ_CSR_RING_HEAD(csr, bank->bank_number, - ring->ring_number); - tail = READ_CSR_RING_TAIL(csr, bank->bank_number, - ring->ring_number); - empty = READ_CSR_E_STAT(csr, bank->bank_number); + head = csr_ops->read_csr_ring_head(csr, bank->bank_number, + ring->ring_number); + tail = csr_ops->read_csr_ring_tail(csr, bank->bank_number, + ring->ring_number); + empty = csr_ops->read_csr_e_stat(csr, bank->bank_number); seq_printf(sfile, "ring num %02d, head %04x, tail %04x, empty: %d\n", diff --git a/drivers/crypto/qat/qat_common/adf_vf_isr.c b/drivers/crypto/qat/qat_common/adf_vf_isr.c index c4a44dc6af3e..38d316a42ba6 100644 --- a/drivers/crypto/qat/qat_common/adf_vf_isr.c +++ b/drivers/crypto/qat/qat_common/adf_vf_isr.c @@ -156,6 +156,7 @@ static irqreturn_t adf_isr(int irq, void *privdata) { struct adf_accel_dev *accel_dev = privdata; struct adf_hw_device_data *hw_data = accel_dev->hw_device; + struct adf_hw_csr_ops *csr_ops = &hw_data->csr_ops; struct adf_bar *pmisc = &GET_BARS(accel_dev)[hw_data->get_misc_bar_id(hw_data)]; void __iomem *pmisc_bar_addr = pmisc->virt_addr; @@ -180,8 +181,8 @@ static irqreturn_t adf_isr(int irq, void *privdata) struct adf_etr_bank_data *bank = &etr_data->banks[0]; /* Disable Flag and Coalesce Ring Interrupts */ - WRITE_CSR_INT_FLAG_AND_COL(bank->csr_addr, bank->bank_number, - 0); + csr_ops->write_csr_int_flag_and_col(bank->csr_addr, + bank->bank_number, 0); tasklet_hi_schedule(&bank->resp_handler); return IRQ_HANDLED; } diff --git a/drivers/crypto/qat/qat_dh895xcc/adf_dh895xcc_hw_data.c b/drivers/crypto/qat/qat_dh895xcc/adf_dh895xcc_hw_data.c index 7b2f13ff49fd..39423316664b 100644 --- a/drivers/crypto/qat/qat_dh895xcc/adf_dh895xcc_hw_data.c +++ 
b/drivers/crypto/qat/qat_dh895xcc/adf_dh895xcc_hw_data.c @@ -226,6 +226,7 @@ void adf_init_hw_data_dh895xcc(struct adf_hw_device_data *hw_data) hw_data->enable_vf2pf_comms = adf_pf_enable_vf2pf_comms; hw_data->reset_device = adf_reset_sbr; hw_data->min_iov_compat_ver = ADF_PFVF_COMPATIBILITY_VERSION; + adf_gen2_init_hw_csr_ops(&hw_data->csr_ops); } void adf_clean_hw_data_dh895xcc(struct adf_hw_device_data *hw_data) diff --git a/drivers/crypto/qat/qat_dh895xccvf/adf_dh895xccvf_hw_data.c b/drivers/crypto/qat/qat_dh895xccvf/adf_dh895xccvf_hw_data.c index eca144bc1d67..f14fb82ed6df 100644 --- a/drivers/crypto/qat/qat_dh895xccvf/adf_dh895xccvf_hw_data.c +++ b/drivers/crypto/qat/qat_dh895xccvf/adf_dh895xccvf_hw_data.c @@ -3,6 +3,7 @@ #include #include #include +#include #include "adf_dh895xccvf_hw_data.h" static struct adf_hw_device_class dh895xcciov_class = { @@ -98,6 +99,7 @@ void adf_init_hw_data_dh895xcciov(struct adf_hw_device_data *hw_data) hw_data->min_iov_compat_ver = ADF_PFVF_COMPATIBILITY_VERSION; hw_data->dev_class->instances++; adf_devmgr_update_class_index(hw_data); + adf_gen2_init_hw_csr_ops(&hw_data->csr_ops); } void adf_clean_hw_data_dh895xcciov(struct adf_hw_device_data *hw_data) From patchwork Mon Oct 12 20:38:22 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Giovanni Cabiddu X-Patchwork-Id: 285447 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 820D9C433DF for ; Mon, 12 Oct 2020 20:39:17 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 47618208D5 for ; Mon, 12 Oct 2020 20:39:17 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729578AbgJLUjQ (ORCPT ); Mon, 12 Oct 2020 16:39:16 -0400 Received: from mga09.intel.com ([134.134.136.24]:33900 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728412AbgJLUjO (ORCPT ); Mon, 12 Oct 2020 16:39:14 -0400 IronPort-SDR: gM7mF6YBgd0toe8aoeCGx9E4mmeVsVnMpFXh+GgCCLaWBNhTnRewYCk6sV0xRjZ0a2VYU9wufP NInnQVlGAPow== X-IronPort-AV: E=McAfee;i="6000,8403,9772"; a="165913075" X-IronPort-AV: E=Sophos;i="5.77,367,1596524400"; d="scan'208";a="165913075" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Oct 2020 13:39:14 -0700 IronPort-SDR: ULtCZEexEKlnpqAzvymPQV1qtkAx6bdvOIq2lwP/RAb9Xv0Npg9rUW6w0ohF13OdDJxVnnCSeH CFZ6RFRsiGfg== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.77,367,1596524400"; d="scan'208";a="299328146" Received: from silpixa00400314.ir.intel.com (HELO silpixa00400314.ger.corp.intel.com) ([10.237.222.51]) by fmsmga007.fm.intel.com with ESMTP; 12 Oct 2020 13:39:12 -0700 From: Giovanni Cabiddu To: herbert@gondor.apana.org.au Cc: linux-crypto@vger.kernel.org, qat-linux@intel.com, Giovanni Cabiddu , Fiona Trahe , Wojciech Ziemba , Andy Shevchenko Subject: [PATCH 06/31] crypto: qat - relocate GEN2 CSR access code Date: Mon, 12 Oct 2020 21:38:22 +0100 
Message-Id: <20201012203847.340030-7-giovanni.cabiddu@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201012203847.340030-1-giovanni.cabiddu@intel.com> References: <20201012203847.340030-1-giovanni.cabiddu@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Move gen2 specific transport macros to adf_gen2_hw_data.c. Signed-off-by: Giovanni Cabiddu Reviewed-by: Fiona Trahe Reviewed-by: Wojciech Ziemba Reviewed-by: Andy Shevchenko --- .../crypto/qat/qat_common/adf_gen2_hw_data.c | 1 - .../crypto/qat/qat_common/adf_gen2_hw_data.h | 68 +++++++++++++++++++ .../qat_common/adf_transport_access_macros.h | 64 ----------------- 3 files changed, 68 insertions(+), 65 deletions(-) diff --git a/drivers/crypto/qat/qat_common/adf_gen2_hw_data.c b/drivers/crypto/qat/qat_common/adf_gen2_hw_data.c index 07a9211bf7f9..9011c94156a9 100644 --- a/drivers/crypto/qat/qat_common/adf_gen2_hw_data.c +++ b/drivers/crypto/qat/qat_common/adf_gen2_hw_data.c @@ -1,7 +1,6 @@ // SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0-only) /* Copyright(c) 2020 Intel Corporation */ #include "adf_gen2_hw_data.h" -#include "adf_transport_access_macros.h" void adf_gen2_cfg_iov_thds(struct adf_accel_dev *accel_dev, bool enable, int num_a_regs, int num_b_regs) diff --git a/drivers/crypto/qat/qat_common/adf_gen2_hw_data.h b/drivers/crypto/qat/qat_common/adf_gen2_hw_data.h index e6d3919a56a1..592aee627762 100644 --- a/drivers/crypto/qat/qat_common/adf_gen2_hw_data.h +++ b/drivers/crypto/qat/qat_common/adf_gen2_hw_data.h @@ -5,6 +5,74 @@ #include "adf_accel_devices.h" +/* Transport access */ +#define ADF_BANK_INT_SRC_SEL_MASK_0 0x4444444CUL +#define ADF_BANK_INT_SRC_SEL_MASK_X 0x44444444UL +#define ADF_RING_CSR_RING_CONFIG 0x000 +#define ADF_RING_CSR_RING_LBASE 0x040 +#define ADF_RING_CSR_RING_UBASE 0x080 +#define ADF_RING_CSR_RING_HEAD 0x0C0 +#define ADF_RING_CSR_RING_TAIL 0x100 +#define ADF_RING_CSR_E_STAT 0x14C +#define ADF_RING_CSR_INT_FLAG 0x170 +#define ADF_RING_CSR_INT_SRCSEL 0x174 +#define ADF_RING_CSR_INT_SRCSEL_2 0x178 +#define ADF_RING_CSR_INT_COL_EN 0x17C +#define ADF_RING_CSR_INT_COL_CTL 0x180 +#define ADF_RING_CSR_INT_FLAG_AND_COL 0x184 +#define ADF_RING_CSR_INT_COL_CTL_ENABLE 0x80000000 +#define ADF_RING_BUNDLE_SIZE 0x1000 + +#define READ_CSR_RING_HEAD(csr_base_addr, bank, ring) \ + ADF_CSR_RD(csr_base_addr, (ADF_RING_BUNDLE_SIZE * (bank)) + \ + ADF_RING_CSR_RING_HEAD + ((ring) << 2)) +#define READ_CSR_RING_TAIL(csr_base_addr, bank, ring) \ + ADF_CSR_RD(csr_base_addr, (ADF_RING_BUNDLE_SIZE * (bank)) + \ + ADF_RING_CSR_RING_TAIL + ((ring) << 2)) +#define READ_CSR_E_STAT(csr_base_addr, bank) \ + ADF_CSR_RD(csr_base_addr, (ADF_RING_BUNDLE_SIZE * (bank)) + \ + ADF_RING_CSR_E_STAT) +#define WRITE_CSR_RING_CONFIG(csr_base_addr, bank, ring, value) \ + ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * (bank)) + \ + ADF_RING_CSR_RING_CONFIG + ((ring) << 2), value) +#define WRITE_CSR_RING_BASE(csr_base_addr, bank, ring, value) \ +do { \ + u32 l_base = 0, u_base = 0; \ + l_base = (u32)((value) & 0xFFFFFFFF); \ + u_base = (u32)(((value) & 0xFFFFFFFF00000000ULL) >> 32); \ + ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * (bank)) + \ + ADF_RING_CSR_RING_LBASE + ((ring) << 2), l_base); \ + ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * (bank)) + \ + ADF_RING_CSR_RING_UBASE + ((ring) << 2), u_base); \ +} while (0) + +#define WRITE_CSR_RING_HEAD(csr_base_addr, bank, ring, value) \ + ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * (bank)) + \ + ADF_RING_CSR_RING_HEAD + ((ring) 
<< 2), value) +#define WRITE_CSR_RING_TAIL(csr_base_addr, bank, ring, value) \ + ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * (bank)) + \ + ADF_RING_CSR_RING_TAIL + ((ring) << 2), value) +#define WRITE_CSR_INT_FLAG(csr_base_addr, bank, value) \ + ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * (bank)) + \ + ADF_RING_CSR_INT_FLAG, value) +#define WRITE_CSR_INT_SRCSEL(csr_base_addr, bank) \ +do { \ + ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * (bank)) + \ + ADF_RING_CSR_INT_SRCSEL, ADF_BANK_INT_SRC_SEL_MASK_0); \ + ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * (bank)) + \ + ADF_RING_CSR_INT_SRCSEL_2, ADF_BANK_INT_SRC_SEL_MASK_X); \ +} while (0) +#define WRITE_CSR_INT_COL_EN(csr_base_addr, bank, value) \ + ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * (bank)) + \ + ADF_RING_CSR_INT_COL_EN, value) +#define WRITE_CSR_INT_COL_CTL(csr_base_addr, bank, value) \ + ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * (bank)) + \ + ADF_RING_CSR_INT_COL_CTL, \ + ADF_RING_CSR_INT_COL_CTL_ENABLE | (value)) +#define WRITE_CSR_INT_FLAG_AND_COL(csr_base_addr, bank, value) \ + ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * (bank)) + \ + ADF_RING_CSR_INT_FLAG_AND_COL, value) + /* AE to function map */ #define AE2FUNCTION_MAP_A_OFFSET (0x3A400 + 0x190) #define AE2FUNCTION_MAP_B_OFFSET (0x3A400 + 0x310) diff --git a/drivers/crypto/qat/qat_common/adf_transport_access_macros.h b/drivers/crypto/qat/qat_common/adf_transport_access_macros.h index 950d1988556c..4642b0b5cfb0 100644 --- a/drivers/crypto/qat/qat_common/adf_transport_access_macros.h +++ b/drivers/crypto/qat/qat_common/adf_transport_access_macros.h @@ -4,23 +4,7 @@ #define ADF_TRANSPORT_ACCESS_MACROS_H #include "adf_accel_devices.h" -#define ADF_BANK_INT_SRC_SEL_MASK_0 0x4444444CUL -#define ADF_BANK_INT_SRC_SEL_MASK_X 0x44444444UL #define ADF_BANK_INT_FLAG_CLEAR_MASK 0xFFFF -#define ADF_RING_CSR_RING_CONFIG 0x000 -#define ADF_RING_CSR_RING_LBASE 0x040 -#define ADF_RING_CSR_RING_UBASE 0x080 -#define ADF_RING_CSR_RING_HEAD 0x0C0 -#define ADF_RING_CSR_RING_TAIL 0x100 -#define ADF_RING_CSR_E_STAT 0x14C -#define ADF_RING_CSR_INT_FLAG 0x170 -#define ADF_RING_CSR_INT_SRCSEL 0x174 -#define ADF_RING_CSR_INT_SRCSEL_2 0x178 -#define ADF_RING_CSR_INT_COL_EN 0x17C -#define ADF_RING_CSR_INT_COL_CTL 0x180 -#define ADF_RING_CSR_INT_FLAG_AND_COL 0x184 -#define ADF_RING_CSR_INT_COL_CTL_ENABLE 0x80000000 -#define ADF_RING_BUNDLE_SIZE 0x1000 #define ADF_RING_CONFIG_NEAR_FULL_WM 0x0A #define ADF_RING_CONFIG_NEAR_EMPTY_WM 0x05 #define ADF_COALESCING_MIN_TIME 0x1FF @@ -74,52 +58,4 @@ | size) #define BUILD_RING_BASE_ADDR(addr, size) \ ((addr >> 6) & (0xFFFFFFFFFFFFFFFFULL << size)) -#define READ_CSR_RING_HEAD(csr_base_addr, bank, ring) \ - ADF_CSR_RD(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \ - ADF_RING_CSR_RING_HEAD + (ring << 2)) -#define READ_CSR_RING_TAIL(csr_base_addr, bank, ring) \ - ADF_CSR_RD(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \ - ADF_RING_CSR_RING_TAIL + (ring << 2)) -#define READ_CSR_E_STAT(csr_base_addr, bank) \ - ADF_CSR_RD(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \ - ADF_RING_CSR_E_STAT) -#define WRITE_CSR_RING_CONFIG(csr_base_addr, bank, ring, value) \ - ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \ - ADF_RING_CSR_RING_CONFIG + (ring << 2), value) -#define WRITE_CSR_RING_BASE(csr_base_addr, bank, ring, value) \ -do { \ - u32 l_base = 0, u_base = 0; \ - l_base = (u32)(value & 0xFFFFFFFF); \ - u_base = (u32)((value & 0xFFFFFFFF00000000ULL) >> 32); \ - ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + 
\ - ADF_RING_CSR_RING_LBASE + (ring << 2), l_base); \ - ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \ - ADF_RING_CSR_RING_UBASE + (ring << 2), u_base); \ -} while (0) -#define WRITE_CSR_RING_HEAD(csr_base_addr, bank, ring, value) \ - ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \ - ADF_RING_CSR_RING_HEAD + (ring << 2), value) -#define WRITE_CSR_RING_TAIL(csr_base_addr, bank, ring, value) \ - ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \ - ADF_RING_CSR_RING_TAIL + (ring << 2), value) -#define WRITE_CSR_INT_FLAG(csr_base_addr, bank, value) \ - ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * (bank)) + \ - ADF_RING_CSR_INT_FLAG, value) -#define WRITE_CSR_INT_SRCSEL(csr_base_addr, bank) \ -do { \ - ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \ - ADF_RING_CSR_INT_SRCSEL, ADF_BANK_INT_SRC_SEL_MASK_0); \ - ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \ - ADF_RING_CSR_INT_SRCSEL_2, ADF_BANK_INT_SRC_SEL_MASK_X); \ -} while (0) -#define WRITE_CSR_INT_COL_EN(csr_base_addr, bank, value) \ - ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \ - ADF_RING_CSR_INT_COL_EN, value) -#define WRITE_CSR_INT_COL_CTL(csr_base_addr, bank, value) \ - ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \ - ADF_RING_CSR_INT_COL_CTL, \ - ADF_RING_CSR_INT_COL_CTL_ENABLE | value) -#define WRITE_CSR_INT_FLAG_AND_COL(csr_base_addr, bank, value) \ - ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) + \ - ADF_RING_CSR_INT_FLAG_AND_COL, value) #endif From patchwork Mon Oct 12 20:38:23 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Giovanni Cabiddu X-Patchwork-Id: 285446 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 89679C433DF for ; Mon, 12 Oct 2020 20:39:19 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 3C625208D5 for ; Mon, 12 Oct 2020 20:39:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729686AbgJLUjS (ORCPT ); Mon, 12 Oct 2020 16:39:18 -0400 Received: from mga09.intel.com ([134.134.136.24]:33938 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729539AbgJLUjR (ORCPT ); Mon, 12 Oct 2020 16:39:17 -0400 IronPort-SDR: fZ3nsDnQDrsz3NY8IZbJlHI5p3rzwEGLPCJxgI0SeDTbP+RUifK3bqFo1TI0qIbo6xzgwvYa+9 wo2cklSQbyJQ== X-IronPort-AV: E=McAfee;i="6000,8403,9772"; a="165913082" X-IronPort-AV: E=Sophos;i="5.77,367,1596524400"; d="scan'208";a="165913082" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Oct 2020 13:39:16 -0700 IronPort-SDR: 1U8/sDb1kI5Vem81YLHtcvURvlw+nwM1QJ4v2gydSCfaNpQG8He1QMoCvo5j2sSIV6JpfIGBx1 Mdgmm8rpUAxw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.77,367,1596524400"; d="scan'208";a="299328151" Received: from silpixa00400314.ir.intel.com (HELO silpixa00400314.ger.corp.intel.com) ([10.237.222.51]) by fmsmga007.fm.intel.com with 
ESMTP; 12 Oct 2020 13:39:14 -0700 From: Giovanni Cabiddu To: herbert@gondor.apana.org.au Cc: linux-crypto@vger.kernel.org, qat-linux@intel.com, Giovanni Cabiddu , Fiona Trahe , Wojciech Ziemba , Andy Shevchenko Subject: [PATCH 07/31] crypto: qat - abstract admin interface Date: Mon, 12 Oct 2020 21:38:23 +0100 Message-Id: <20201012203847.340030-8-giovanni.cabiddu@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201012203847.340030-1-giovanni.cabiddu@intel.com> References: <20201012203847.340030-1-giovanni.cabiddu@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Abstract access to admin interface and move generation specific code into adf_gen2_hw_data.c in preparation for the introduction of the qat_4xxx driver. Signed-off-by: Giovanni Cabiddu Reviewed-by: Fiona Trahe Reviewed-by: Wojciech Ziemba Reviewed-by: Andy Shevchenko --- .../crypto/qat/qat_c3xxx/adf_c3xxx_hw_data.c | 1 + .../crypto/qat/qat_c62x/adf_c62x_hw_data.c | 1 + .../crypto/qat/qat_common/adf_accel_devices.h | 7 ++++++ drivers/crypto/qat/qat_common/adf_admin.c | 25 +++++++++++-------- .../crypto/qat/qat_common/adf_gen2_hw_data.c | 8 ++++++ .../crypto/qat/qat_common/adf_gen2_hw_data.h | 6 +++++ .../qat/qat_dh895xcc/adf_dh895xcc_hw_data.c | 1 + 7 files changed, 39 insertions(+), 10 deletions(-) diff --git a/drivers/crypto/qat/qat_c3xxx/adf_c3xxx_hw_data.c b/drivers/crypto/qat/qat_c3xxx/adf_c3xxx_hw_data.c index 7af38b947cfe..f72ed415800e 100644 --- a/drivers/crypto/qat/qat_c3xxx/adf_c3xxx_hw_data.c +++ b/drivers/crypto/qat/qat_c3xxx/adf_c3xxx_hw_data.c @@ -202,6 +202,7 @@ void adf_init_hw_data_c3xxx(struct adf_hw_device_data *hw_data) hw_data->get_misc_bar_id = get_misc_bar_id; hw_data->get_pf2vf_offset = get_pf2vf_offset; hw_data->get_vintmsk_offset = get_vintmsk_offset; + hw_data->get_admin_info = adf_gen2_get_admin_info; hw_data->get_sku = get_sku; hw_data->fw_name = ADF_C3XXX_FW; hw_data->fw_mmp_name = ADF_C3XXX_MMP; diff --git a/drivers/crypto/qat/qat_c62x/adf_c62x_hw_data.c b/drivers/crypto/qat/qat_c62x/adf_c62x_hw_data.c index c18fb77dd8ec..d4443523dc9d 100644 --- a/drivers/crypto/qat/qat_c62x/adf_c62x_hw_data.c +++ b/drivers/crypto/qat/qat_c62x/adf_c62x_hw_data.c @@ -212,6 +212,7 @@ void adf_init_hw_data_c62x(struct adf_hw_device_data *hw_data) hw_data->get_misc_bar_id = get_misc_bar_id; hw_data->get_pf2vf_offset = get_pf2vf_offset; hw_data->get_vintmsk_offset = get_vintmsk_offset; + hw_data->get_admin_info = adf_gen2_get_admin_info; hw_data->get_sku = get_sku; hw_data->fw_name = ADF_C62X_FW; hw_data->fw_mmp_name = ADF_C62X_MMP; diff --git a/drivers/crypto/qat/qat_common/adf_accel_devices.h b/drivers/crypto/qat/qat_common/adf_accel_devices.h index 459e22076813..5f57850c2e8d 100644 --- a/drivers/crypto/qat/qat_common/adf_accel_devices.h +++ b/drivers/crypto/qat/qat_common/adf_accel_devices.h @@ -97,6 +97,12 @@ struct adf_hw_device_class { u32 instances; } __packed; +struct admin_info { + u32 admin_msg_ur; + u32 admin_msg_lr; + u32 mailbox_offset; +}; + struct adf_hw_csr_ops { u32 (*read_csr_ring_head)(void __iomem *csr_base_addr, u32 bank, u32 ring); @@ -138,6 +144,7 @@ struct adf_hw_device_data { u32 (*get_num_accels)(struct adf_hw_device_data *self); u32 (*get_pf2vf_offset)(u32 i); u32 (*get_vintmsk_offset)(u32 i); + void (*get_admin_info)(struct admin_info *admin_csrs_info); enum dev_sku_info (*get_sku)(struct adf_hw_device_data *self); int (*alloc_irq)(struct adf_accel_dev *accel_dev); void (*free_irq)(struct adf_accel_dev *accel_dev); diff --git 
a/drivers/crypto/qat/qat_common/adf_admin.c b/drivers/crypto/qat/qat_common/adf_admin.c index ec9b390276d6..3ae7c89ce82a 100644 --- a/drivers/crypto/qat/qat_common/adf_admin.c +++ b/drivers/crypto/qat/qat_common/adf_admin.c @@ -10,11 +10,7 @@ #include "adf_common_drv.h" #include "icp_qat_fw_init_admin.h" -/* Admin Messages Registers */ -#define ADF_DH895XCC_ADMINMSGUR_OFFSET (0x3A000 + 0x574) -#define ADF_DH895XCC_ADMINMSGLR_OFFSET (0x3A000 + 0x578) -#define ADF_DH895XCC_MAILBOX_BASE_OFFSET 0x20970 -#define ADF_DH895XCC_MAILBOX_STRIDE 0x1000 +#define ADF_ADMIN_MAILBOX_STRIDE 0x1000 #define ADF_ADMINMSG_LEN 32 #define ADF_CONST_TABLE_SIZE 1024 #define ADF_ADMIN_POLL_DELAY_US 20 @@ -118,7 +114,7 @@ static int adf_put_admin_msg_sync(struct adf_accel_dev *accel_dev, u32 ae, struct adf_admin_comms *admin = accel_dev->admin; int offset = ae * ADF_ADMINMSG_LEN * 2; void __iomem *mailbox = admin->mailbox_addr; - int mb_offset = ae * ADF_DH895XCC_MAILBOX_STRIDE; + int mb_offset = ae * ADF_ADMIN_MAILBOX_STRIDE; struct icp_qat_fw_init_admin_req *request = in; mutex_lock(&admin->lock); @@ -225,8 +221,9 @@ int adf_init_admin_comms(struct adf_accel_dev *accel_dev) struct adf_bar *pmisc = &GET_BARS(accel_dev)[hw_data->get_misc_bar_id(hw_data)]; void __iomem *csr = pmisc->virt_addr; - void __iomem *mailbox = (void __iomem *)((uintptr_t)csr + - ADF_DH895XCC_MAILBOX_BASE_OFFSET); + struct admin_info admin_csrs_info; + u32 mailbox_offset, adminmsg_u, adminmsg_l; + void __iomem *mailbox; u64 reg_val; admin = kzalloc_node(sizeof(*accel_dev->admin), GFP_KERNEL, @@ -254,9 +251,17 @@ int adf_init_admin_comms(struct adf_accel_dev *accel_dev) } memcpy(admin->virt_tbl_addr, const_tab, sizeof(const_tab)); + hw_data->get_admin_info(&admin_csrs_info); + + mailbox_offset = admin_csrs_info.mailbox_offset; + mailbox = (void __iomem *)((uintptr_t)csr + mailbox_offset); + adminmsg_u = admin_csrs_info.admin_msg_ur; + adminmsg_l = admin_csrs_info.admin_msg_lr; + reg_val = (u64)admin->phy_addr; - ADF_CSR_WR(csr, ADF_DH895XCC_ADMINMSGUR_OFFSET, reg_val >> 32); - ADF_CSR_WR(csr, ADF_DH895XCC_ADMINMSGLR_OFFSET, reg_val); + ADF_CSR_WR(csr, adminmsg_u, upper_32_bits(reg_val)); + ADF_CSR_WR(csr, adminmsg_l, lower_32_bits(reg_val)); + mutex_init(&admin->lock); admin->mailbox_addr = mailbox; accel_dev->admin = admin; diff --git a/drivers/crypto/qat/qat_common/adf_gen2_hw_data.c b/drivers/crypto/qat/qat_common/adf_gen2_hw_data.c index 9011c94156a9..15a0bc921d7e 100644 --- a/drivers/crypto/qat/qat_common/adf_gen2_hw_data.c +++ b/drivers/crypto/qat/qat_common/adf_gen2_hw_data.c @@ -37,6 +37,14 @@ void adf_gen2_cfg_iov_thds(struct adf_accel_dev *accel_dev, bool enable, } EXPORT_SYMBOL_GPL(adf_gen2_cfg_iov_thds); +void adf_gen2_get_admin_info(struct admin_info *admin_csrs_info) +{ + admin_csrs_info->mailbox_offset = ADF_MAILBOX_BASE_OFFSET; + admin_csrs_info->admin_msg_ur = ADF_ADMINMSGUR_OFFSET; + admin_csrs_info->admin_msg_lr = ADF_ADMINMSGLR_OFFSET; +} +EXPORT_SYMBOL_GPL(adf_gen2_get_admin_info); + static u32 read_csr_ring_head(void __iomem *csr_base_addr, u32 bank, u32 ring) { return READ_CSR_RING_HEAD(csr_base_addr, bank, ring); diff --git a/drivers/crypto/qat/qat_common/adf_gen2_hw_data.h b/drivers/crypto/qat/qat_common/adf_gen2_hw_data.h index 592aee627762..e9d2591b2be8 100644 --- a/drivers/crypto/qat/qat_common/adf_gen2_hw_data.h +++ b/drivers/crypto/qat/qat_common/adf_gen2_hw_data.h @@ -92,8 +92,14 @@ do { \ ADF_CSR_WR(pmisc_bar_addr, AE2FUNCTION_MAP_B_OFFSET + \ AE2FUNCTION_MAP_REG_SIZE * (index), value) +/* Admin Interface 
Offsets */ +#define ADF_ADMINMSGUR_OFFSET (0x3A000 + 0x574) +#define ADF_ADMINMSGLR_OFFSET (0x3A000 + 0x578) +#define ADF_MAILBOX_BASE_OFFSET 0x20970 + void adf_gen2_cfg_iov_thds(struct adf_accel_dev *accel_dev, bool enable, int num_a_regs, int num_b_regs); void adf_gen2_init_hw_csr_ops(struct adf_hw_csr_ops *csr_ops); +void adf_gen2_get_admin_info(struct admin_info *admin_csrs_info); #endif diff --git a/drivers/crypto/qat/qat_dh895xcc/adf_dh895xcc_hw_data.c b/drivers/crypto/qat/qat_dh895xcc/adf_dh895xcc_hw_data.c index 39423316664b..c568e9808cec 100644 --- a/drivers/crypto/qat/qat_dh895xcc/adf_dh895xcc_hw_data.c +++ b/drivers/crypto/qat/qat_dh895xcc/adf_dh895xcc_hw_data.c @@ -210,6 +210,7 @@ void adf_init_hw_data_dh895xcc(struct adf_hw_device_data *hw_data) hw_data->get_misc_bar_id = get_misc_bar_id; hw_data->get_pf2vf_offset = get_pf2vf_offset; hw_data->get_vintmsk_offset = get_vintmsk_offset; + hw_data->get_admin_info = adf_gen2_get_admin_info; hw_data->get_sram_bar_id = get_sram_bar_id; hw_data->get_sku = get_sku; hw_data->fw_name = ADF_DH895XCC_FW; From patchwork Mon Oct 12 20:38:24 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Giovanni Cabiddu X-Patchwork-Id: 269565 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id AAFFEC43457 for ; Mon, 12 Oct 2020 20:39:19 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 6660E214DB for ; Mon, 12 Oct 2020 20:39:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729539AbgJLUjS (ORCPT ); Mon, 12 Oct 2020 16:39:18 -0400 Received: from mga09.intel.com ([134.134.136.24]:33900 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728412AbgJLUjS (ORCPT ); Mon, 12 Oct 2020 16:39:18 -0400 IronPort-SDR: RkaLEVI4JHZn62UYp4UDUu9vgh8DSlm1gh95C8vrlFs15kINSpeBeCXqFqidRWiPJ/+mst5OFi TqQI+s8Cvnow== X-IronPort-AV: E=McAfee;i="6000,8403,9772"; a="165913086" X-IronPort-AV: E=Sophos;i="5.77,367,1596524400"; d="scan'208";a="165913086" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Oct 2020 13:39:18 -0700 IronPort-SDR: aVsqO9JZ4tUZkjM4HSLngfPazOmGb6wv6FdT17UIi/lkWB+vLoi2Pm9RsWP9QcJlms+X8bphGG kzOPCCRFO9gQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.77,367,1596524400"; d="scan'208";a="299328157" Received: from silpixa00400314.ir.intel.com (HELO silpixa00400314.ger.corp.intel.com) ([10.237.222.51]) by fmsmga007.fm.intel.com with ESMTP; 12 Oct 2020 13:39:16 -0700 From: Giovanni Cabiddu To: herbert@gondor.apana.org.au Cc: linux-crypto@vger.kernel.org, qat-linux@intel.com, Giovanni Cabiddu , Fiona Trahe , Wojciech Ziemba , Andy Shevchenko Subject: [PATCH 08/31] crypto: qat - add packed to init admin structures Date: Mon, 12 Oct 2020 21:38:24 +0100 Message-Id: <20201012203847.340030-9-giovanni.cabiddu@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: 
<20201012203847.340030-1-giovanni.cabiddu@intel.com> References: <20201012203847.340030-1-giovanni.cabiddu@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Add packed attribute to the structures icp_qat_fw_init_admin_req and icp_qat_fw_init_admin_resp as they are accessed by firmware. Signed-off-by: Giovanni Cabiddu Reviewed-by: Fiona Trahe Reviewed-by: Wojciech Ziemba Reviewed-by: Andy Shevchenko --- drivers/crypto/qat/qat_common/icp_qat_fw_init_admin.h | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/crypto/qat/qat_common/icp_qat_fw_init_admin.h b/drivers/crypto/qat/qat_common/icp_qat_fw_init_admin.h index d4d188cd7ed0..3868bcbed252 100644 --- a/drivers/crypto/qat/qat_common/icp_qat_fw_init_admin.h +++ b/drivers/crypto/qat/qat_common/icp_qat_fw_init_admin.h @@ -39,7 +39,7 @@ struct icp_qat_fw_init_admin_req { }; __u32 resrvd4; -}; +} __packed; struct icp_qat_fw_init_admin_resp { __u8 flags; @@ -92,7 +92,7 @@ struct icp_qat_fw_init_admin_resp { __u64 resrvd8; }; }; -}; +} __packed; #define ICP_QAT_FW_COMN_HEARTBEAT_OK 0 #define ICP_QAT_FW_COMN_HEARTBEAT_BLOCKED 1 From patchwork Mon Oct 12 20:38:25 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Giovanni Cabiddu X-Patchwork-Id: 269564 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 93B44C433E7 for ; Mon, 12 Oct 2020 20:39:22 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 5704220BED for ; Mon, 12 Oct 2020 20:39:22 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730049AbgJLUjV (ORCPT ); Mon, 12 Oct 2020 16:39:21 -0400 Received: from mga09.intel.com ([134.134.136.24]:33953 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728412AbgJLUjU (ORCPT ); Mon, 12 Oct 2020 16:39:20 -0400 IronPort-SDR: J0Glf7CwbXYru1GwbTOGZpAwCUSCKkhPoONwG3t7T1Xv+oUKQ0CJs/ULFKyUCWP+uXES0eGSbt m7DB0lrJPmqw== X-IronPort-AV: E=McAfee;i="6000,8403,9772"; a="165913093" X-IronPort-AV: E=Sophos;i="5.77,367,1596524400"; d="scan'208";a="165913093" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Oct 2020 13:39:19 -0700 IronPort-SDR: n1NqhKUMpGir7bzHKyAni0axBg5PXuPWSZFUru34RILqzVKjP+OZb0Z7hg30Iz/YeGbUsQoPIq ewHCz+9YYaSA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.77,367,1596524400"; d="scan'208";a="299328162" Received: from silpixa00400314.ir.intel.com (HELO silpixa00400314.ger.corp.intel.com) ([10.237.222.51]) by fmsmga007.fm.intel.com with ESMTP; 12 Oct 2020 13:39:18 -0700 From: Giovanni Cabiddu To: herbert@gondor.apana.org.au Cc: linux-crypto@vger.kernel.org, qat-linux@intel.com, Giovanni Cabiddu , Fiona Trahe , Wojciech Ziemba , Andy Shevchenko Subject: [PATCH 09/31] crypto: qat - rename ME in AE Date: Mon, 12 Oct 2020 21:38:25 +0100 Message-Id: <20201012203847.340030-10-giovanni.cabiddu@intel.com> 
X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201012203847.340030-1-giovanni.cabiddu@intel.com> References: <20201012203847.340030-1-giovanni.cabiddu@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Rename occurrences of ME in the admin module with the acronym AE (Acceleration Engine) as the two are equivalent. This is to keep a single acronym for engined in the codebase and follow the documentation in https://01.org/intel-quickassist-technology. Signed-off-by: Giovanni Cabiddu Reviewed-by: Fiona Trahe Reviewed-by: Wojciech Ziemba Reviewed-by: Andy Shevchenko --- drivers/crypto/qat/qat_common/adf_admin.c | 6 +++--- drivers/crypto/qat/qat_common/icp_qat_fw_init_admin.h | 2 +- 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/drivers/crypto/qat/qat_common/adf_admin.c b/drivers/crypto/qat/qat_common/adf_admin.c index 3ae7c89ce82a..13a5e8659682 100644 --- a/drivers/crypto/qat/qat_common/adf_admin.c +++ b/drivers/crypto/qat/qat_common/adf_admin.c @@ -163,7 +163,7 @@ static int adf_send_admin(struct adf_accel_dev *accel_dev, return 0; } -static int adf_init_me(struct adf_accel_dev *accel_dev) +static int adf_init_ae(struct adf_accel_dev *accel_dev) { struct icp_qat_fw_init_admin_req req; struct icp_qat_fw_init_admin_resp resp; @@ -172,7 +172,7 @@ static int adf_init_me(struct adf_accel_dev *accel_dev) memset(&req, 0, sizeof(req)); memset(&resp, 0, sizeof(resp)); - req.cmd_id = ICP_QAT_FW_INIT_ME; + req.cmd_id = ICP_QAT_FW_INIT_AE; return adf_send_admin(accel_dev, &req, &resp, ae_mask); } @@ -206,7 +206,7 @@ int adf_send_admin_init(struct adf_accel_dev *accel_dev) { int ret; - ret = adf_init_me(accel_dev); + ret = adf_init_ae(accel_dev); if (ret) return ret; diff --git a/drivers/crypto/qat/qat_common/icp_qat_fw_init_admin.h b/drivers/crypto/qat/qat_common/icp_qat_fw_init_admin.h index 3868bcbed252..f05ad17fbdd6 100644 --- a/drivers/crypto/qat/qat_common/icp_qat_fw_init_admin.h +++ b/drivers/crypto/qat/qat_common/icp_qat_fw_init_admin.h @@ -6,7 +6,7 @@ #include "icp_qat_fw.h" enum icp_qat_fw_init_admin_cmd_id { - ICP_QAT_FW_INIT_ME = 0, + ICP_QAT_FW_INIT_AE = 0, ICP_QAT_FW_TRNG_ENABLE = 1, ICP_QAT_FW_TRNG_DISABLE = 2, ICP_QAT_FW_CONSTANTS_CFG = 3, From patchwork Mon Oct 12 20:38:26 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Giovanni Cabiddu X-Patchwork-Id: 285445 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 05138C433DF for ; Mon, 12 Oct 2020 20:39:24 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id D238E20FC3 for ; Mon, 12 Oct 2020 20:39:23 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730531AbgJLUjX (ORCPT ); Mon, 12 Oct 2020 16:39:23 -0400 Received: from mga09.intel.com ([134.134.136.24]:33953 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728412AbgJLUjW (ORCPT ); Mon, 12 Oct 2020 16:39:22 -0400 IronPort-SDR: 
teJyB9BoabldI6rHLpEPIiQLo2NoSrE7pPqm/Fj4TrJ7mzoDKN+NMjijSd8JbaSP5RzWu519mA 08IICZzuia5g== X-IronPort-AV: E=McAfee;i="6000,8403,9772"; a="165913103" X-IronPort-AV: E=Sophos;i="5.77,367,1596524400"; d="scan'208";a="165913103" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Oct 2020 13:39:21 -0700 IronPort-SDR: WQDuRONGVPFGnwb3WORjLM8EIqbEg93TJvES/ros78480lh28dbMvxl7eRJ2JpFcZ0RlD65pAf 61/+GHIPoLJg== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.77,367,1596524400"; d="scan'208";a="299328168" Received: from silpixa00400314.ir.intel.com (HELO silpixa00400314.ger.corp.intel.com) ([10.237.222.51]) by fmsmga007.fm.intel.com with ESMTP; 12 Oct 2020 13:39:20 -0700 From: Giovanni Cabiddu To: herbert@gondor.apana.org.au Cc: linux-crypto@vger.kernel.org, qat-linux@intel.com, Giovanni Cabiddu , Fiona Trahe , Wojciech Ziemba , Andy Shevchenko Subject: [PATCH 10/31] crypto: qat - change admin sequence Date: Mon, 12 Oct 2020 21:38:26 +0100 Message-Id: <20201012203847.340030-11-giovanni.cabiddu@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201012203847.340030-1-giovanni.cabiddu@intel.com> References: <20201012203847.340030-1-giovanni.cabiddu@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Call adf_set_fw_constants() before adf_init_ae(). This is required by QAT GEN4 devices, which expect that the FW_CONSTANTS_CFG command is sent to the admin AEs before the FW_INIT_AE command. Swapping the order of the two commands (FW_INIT_AE and FW_CONSTANTS_CFG) is allowed in QAT GEN2 devices as the firmware can handle those in any order. Signed-off-by: Giovanni Cabiddu Reviewed-by: Fiona Trahe Reviewed-by: Wojciech Ziemba Reviewed-by: Andy Shevchenko --- drivers/crypto/qat/qat_common/adf_admin.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/crypto/qat/qat_common/adf_admin.c b/drivers/crypto/qat/qat_common/adf_admin.c index 13a5e8659682..6d94746d266f 100644 --- a/drivers/crypto/qat/qat_common/adf_admin.c +++ b/drivers/crypto/qat/qat_common/adf_admin.c @@ -206,11 +206,11 @@ int adf_send_admin_init(struct adf_accel_dev *accel_dev) { int ret; - ret = adf_init_ae(accel_dev); + ret = adf_set_fw_constants(accel_dev); if (ret) return ret; - return adf_set_fw_constants(accel_dev); + return adf_init_ae(accel_dev); } EXPORT_SYMBOL_GPL(adf_send_admin_init); From patchwork Mon Oct 12 20:38:27 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Giovanni Cabiddu X-Patchwork-Id: 285444 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4E0ABC433DF for ; Mon, 12 Oct 2020 20:39:27 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 1703D20FC3 for ; Mon, 12 Oct 2020 20:39:27 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730459AbgJLUj0 (ORCPT ); Mon, 12 Oct 2020 16:39:26 -0400 Received: from mga09.intel.com 
([134.134.136.24]:33953 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730088AbgJLUjX (ORCPT ); Mon, 12 Oct 2020 16:39:23 -0400 IronPort-SDR: Xo2nFey3OAD0otS9lnf6QUr+2rjxMym2vQ4CUaZmd3qwUtOOJ0pWSZf4yXc7ZRFVFS62YTA3WP 7RxYpv0p3SkQ== X-IronPort-AV: E=McAfee;i="6000,8403,9772"; a="165913107" X-IronPort-AV: E=Sophos;i="5.77,367,1596524400"; d="scan'208";a="165913107" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Oct 2020 13:39:23 -0700 IronPort-SDR: ccNyrkBhKwtkdeNDSQr2HkCYryE33CNWWDPF0qIHAcldAfPEvTf6aZMqrjvSWWkG5K25Z+D77S wJtYQ1tE4nXQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.77,367,1596524400"; d="scan'208";a="299328179" Received: from silpixa00400314.ir.intel.com (HELO silpixa00400314.ger.corp.intel.com) ([10.237.222.51]) by fmsmga007.fm.intel.com with ESMTP; 12 Oct 2020 13:39:21 -0700 From: Giovanni Cabiddu To: herbert@gondor.apana.org.au Cc: linux-crypto@vger.kernel.org, qat-linux@intel.com, Giovanni Cabiddu , Fiona Trahe , Wojciech Ziemba , Andy Shevchenko Subject: [PATCH 11/31] crypto: qat - use admin mask to send fw constants Date: Mon, 12 Oct 2020 21:38:27 +0100 Message-Id: <20201012203847.340030-12-giovanni.cabiddu@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201012203847.340030-1-giovanni.cabiddu@intel.com> References: <20201012203847.340030-1-giovanni.cabiddu@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Introduce an admin AE mask. If this mask is set, the fw constants message is sent only to the engines that belong to that set; otherwise, it is sent to all engines. This is in preparation for the qat_4xxx driver, where the constants message should be sent only to the admin engines. In GEN2 devices (c62x, c3xxx and dh895xcc), the admin AE mask is 0 and the fw constants message is sent to all AEs.
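For clarity, a minimal stand-alone sketch of the intended fallback behaviour follows (illustrative only, not part of the patch; the structure and mask values are hypothetical, while the real selection happens in adf_set_fw_constants() in the diff below):

#include <stdio.h>

typedef unsigned int u32;

struct hw_device_data {
	u32 ae_mask;		/* all acceleration engines */
	u32 admin_ae_mask;	/* admin-capable engines; 0 on GEN2 parts */
};

/* GNU C "x ?: y" evaluates to x when x is non-zero, otherwise to y. */
static u32 fw_constants_ae_mask(const struct hw_device_data *hw)
{
	return hw->admin_ae_mask ?: hw->ae_mask;
}

int main(void)
{
	struct hw_device_data gen2 = { .ae_mask = 0xfff, .admin_ae_mask = 0x0 };
	struct hw_device_data gen4 = { .ae_mask = 0x1ff, .admin_ae_mask = 0x100 };

	printf("GEN2: %#x\n", fw_constants_ae_mask(&gen2));	/* 0xfff, all AEs */
	printf("GEN4: %#x\n", fw_constants_ae_mask(&gen4));	/* 0x100, admin AEs only */
	return 0;
}

The conditional shorthand keeps the caller free of an explicit if/else while preserving the existing GEN2 behaviour, where admin_ae_mask is left at zero.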
Signed-off-by: Giovanni Cabiddu Reviewed-by: Fiona Trahe Reviewed-by: Wojciech Ziemba Reviewed-by: Andy Shevchenko --- drivers/crypto/qat/qat_common/adf_accel_devices.h | 1 + drivers/crypto/qat/qat_common/adf_admin.c | 2 +- 2 files changed, 2 insertions(+), 1 deletion(-) diff --git a/drivers/crypto/qat/qat_common/adf_accel_devices.h b/drivers/crypto/qat/qat_common/adf_accel_devices.h index 5f57850c2e8d..779f62fde3bd 100644 --- a/drivers/crypto/qat/qat_common/adf_accel_devices.h +++ b/drivers/crypto/qat/qat_common/adf_accel_devices.h @@ -171,6 +171,7 @@ struct adf_hw_device_data { u32 instance_id; u16 accel_mask; u16 ae_mask; + u32 admin_ae_mask; u16 tx_rings_mask; u8 tx_rx_gap; u8 num_banks; diff --git a/drivers/crypto/qat/qat_common/adf_admin.c b/drivers/crypto/qat/qat_common/adf_admin.c index 6d94746d266f..dcd580d2afe2 100644 --- a/drivers/crypto/qat/qat_common/adf_admin.c +++ b/drivers/crypto/qat/qat_common/adf_admin.c @@ -182,7 +182,7 @@ static int adf_set_fw_constants(struct adf_accel_dev *accel_dev) struct icp_qat_fw_init_admin_req req; struct icp_qat_fw_init_admin_resp resp; struct adf_hw_device_data *hw_device = accel_dev->hw_device; - u32 ae_mask = hw_device->ae_mask; + u32 ae_mask = hw_device->admin_ae_mask ?: hw_device->ae_mask; memset(&req, 0, sizeof(req)); memset(&resp, 0, sizeof(resp)); From patchwork Mon Oct 12 20:38:28 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Giovanni Cabiddu X-Patchwork-Id: 269563 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1B6C5C433E7 for ; Mon, 12 Oct 2020 20:39:27 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id E77D820BED for ; Mon, 12 Oct 2020 20:39:26 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730629AbgJLUj0 (ORCPT ); Mon, 12 Oct 2020 16:39:26 -0400 Received: from mga09.intel.com ([134.134.136.24]:33991 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730459AbgJLUj0 (ORCPT ); Mon, 12 Oct 2020 16:39:26 -0400 IronPort-SDR: cMpvYsBPLiWdcUAMv/fs+eWoUL9508dcpNmi1y5FqBzbIoVMkVenXQBtHLHfBnrMTgSnKrlVXo xplsh5eiK/Rg== X-IronPort-AV: E=McAfee;i="6000,8403,9772"; a="165913112" X-IronPort-AV: E=Sophos;i="5.77,367,1596524400"; d="scan'208";a="165913112" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Oct 2020 13:39:25 -0700 IronPort-SDR: f9ghzyfiHeNImRmLAmKWZUHEcmYHfZ77xv8m/A6av95X+peLq6GceFR2wccOtG7Guszbo/++TZ 2LSOQ5wJxjuA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.77,367,1596524400"; d="scan'208";a="299328185" Received: from silpixa00400314.ir.intel.com (HELO silpixa00400314.ger.corp.intel.com) ([10.237.222.51]) by fmsmga007.fm.intel.com with ESMTP; 12 Oct 2020 13:39:23 -0700 From: Giovanni Cabiddu To: herbert@gondor.apana.org.au Cc: linux-crypto@vger.kernel.org, qat-linux@intel.com, Giovanni Cabiddu , Fiona Trahe , Wojciech Ziemba , Andy Shevchenko 
Subject: [PATCH 12/31] crypto: qat - update constants table Date: Mon, 12 Oct 2020 21:38:28 +0100 Message-Id: <20201012203847.340030-13-giovanni.cabiddu@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201012203847.340030-1-giovanni.cabiddu@intel.com> References: <20201012203847.340030-1-giovanni.cabiddu@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Extend admin contansts table to support QAT GEN4 devices. This change does not affect QAT GEN2 devices (c62x, c3xxx and dh895xcc) as the table was extended in an unused area which is not referenced by any of those drivers and devices. Signed-off-by: Giovanni Cabiddu Reviewed-by: Fiona Trahe Reviewed-by: Wojciech Ziemba Reviewed-by: Andy Shevchenko --- drivers/crypto/qat/qat_common/adf_admin.c | 44 +++++++++++------------ 1 file changed, 22 insertions(+), 22 deletions(-) diff --git a/drivers/crypto/qat/qat_common/adf_admin.c b/drivers/crypto/qat/qat_common/adf_admin.c index dcd580d2afe2..7c2ca54229aa 100644 --- a/drivers/crypto/qat/qat_common/adf_admin.c +++ b/drivers/crypto/qat/qat_common/adf_admin.c @@ -66,28 +66,28 @@ static const u8 const_tab[1024] __aligned(1024) = { 0xf8, 0x2b, 0xa5, 0x4f, 0xf5, 0x3a, 0x5f, 0x1d, 0x36, 0xf1, 0x51, 0x0e, 0x52, 0x7f, 0xad, 0xe6, 0x82, 0xd1, 0x9b, 0x05, 0x68, 0x8c, 0x2b, 0x3e, 0x6c, 0x1f, 0x1f, 0x83, 0xd9, 0xab, 0xfb, 0x41, 0xbd, 0x6b, 0x5b, 0xe0, 0xcd, 0x19, 0x13, -0x7e, 0x21, 0x79, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, -0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, +0x7e, 0x21, 0x79, 0x14, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, +0x00, 0x40, 0x00, 0x00, 0x00, 0x00, 0x16, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, +0x00, 0x00, 0x00, 0x00, 0x80, 0x00, 0x00, 0x00, 0x00, 0x12, 0x00, 0x00, 0x00, +0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x40, 0x00, 0x00, 0x00, 0x00, 0x18, +0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 
0x00, 0x00, +0x00, 0x00, 0x14, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x14, 0x01, 0x00, +0x00, 0x00, 0x00, 0x00, 0x00, 0x15, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, +0x15, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x14, 0x02, 0x00, 0x00, 0x00, +0x00, 0x00, 0x00, 0x14, 0x03, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x15, 0x02, +0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x15, 0x03, 0x00, 0x00, 0x00, 0x00, 0x00, +0x00, 0x24, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x25, 0x00, 0x00, 0x00, +0x00, 0x00, 0x00, 0x00, 0x24, 0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x25, +0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x12, 0x00, 0x00, 0x00, 0x00, 0x00, +0x00, 0x00, 0x12, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x43, 0x00, 0x00, +0x00, 0x00, 0x00, 0x00, 0x00, 0x43, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, +0x45, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x45, 0x01, 0x00, 0x00, 0x00, +0x00, 0x00, 0x00, 0x44, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x44, 0x01, +0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x2B, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, +0x00, 0x2B, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x20, 0x00, 0x00, 0x00, +0x00, 0x00, 0x00, 0x00, 0x15, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, +0x00, 0x00, 0x80, 0x00, 0x00, 0x00, 0x00, 0x17, 0x00, 0x00, 0x00, 0x00, 0x00, +0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x10, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, From patchwork Mon Oct 12 20:38:29 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Giovanni Cabiddu X-Patchwork-Id: 269562 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id CEBEAC433E7 for ; Mon, 12 Oct 2020 20:39:30 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id A272620FC3 for ; Mon, 12 Oct 2020 20:39:30 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730637AbgJLUja (ORCPT ); Mon, 12 Oct 2020 16:39:30 -0400 Received: from mga09.intel.com ([134.134.136.24]:34003 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730088AbgJLUj2 (ORCPT ); Mon, 12 Oct 2020 16:39:28 -0400 IronPort-SDR: VmaZI0JsA+B45EYI9mdVK4rM61uu2PdyKt21buKO6HkTE+cbc582euWfsFeMMLUHS/w5ko4bnH uFmIPsOLGs4w== X-IronPort-AV: E=McAfee;i="6000,8403,9772"; a="165913117" X-IronPort-AV: E=Sophos;i="5.77,367,1596524400"; d="scan'208";a="165913117" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Oct 2020 13:39:27 -0700 IronPort-SDR: /xaw5n6BIQduLJeBproiv9WHMzmKTKvE2jj0VmVekMKvCpUCP0WfJOyAW/O+4LT4BhpKE3r7Px bD9BjdF2T1JA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.77,367,1596524400"; d="scan'208";a="299328189" Received: from silpixa00400314.ir.intel.com (HELO silpixa00400314.ger.corp.intel.com) 
([10.237.222.51]) by fmsmga007.fm.intel.com with ESMTP; 12 Oct 2020 13:39:25 -0700 From: Giovanni Cabiddu To: herbert@gondor.apana.org.au Cc: linux-crypto@vger.kernel.org, qat-linux@intel.com, Giovanni Cabiddu , Fiona Trahe , Wojciech Ziemba , Andy Shevchenko Subject: [PATCH 13/31] crypto: qat - remove writes into WQCFG Date: Mon, 12 Oct 2020 21:38:29 +0100 Message-Id: <20201012203847.340030-14-giovanni.cabiddu@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201012203847.340030-1-giovanni.cabiddu@intel.com> References: <20201012203847.340030-1-giovanni.cabiddu@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org WQCFG registers contain the correct values after reset in all generations of QAT. No need to write into them. Signed-off-by: Giovanni Cabiddu Reviewed-by: Fiona Trahe Reviewed-by: Wojciech Ziemba Reviewed-by: Andy Shevchenko --- drivers/crypto/qat/qat_common/adf_hw_arbiter.c | 13 ------------- 1 file changed, 13 deletions(-) diff --git a/drivers/crypto/qat/qat_common/adf_hw_arbiter.c b/drivers/crypto/qat/qat_common/adf_hw_arbiter.c index d4162783f970..cbb9f0b8ff74 100644 --- a/drivers/crypto/qat/qat_common/adf_hw_arbiter.c +++ b/drivers/crypto/qat/qat_common/adf_hw_arbiter.c @@ -11,7 +11,6 @@ #define ADF_ARB_REG_SLOT 0x1000 #define ADF_ARB_WTR_OFFSET 0x010 #define ADF_ARB_RO_EN_OFFSET 0x090 -#define ADF_ARB_WQCFG_OFFSET 0x100 #define ADF_ARB_WRK_2_SER_MAP_OFFSET 0x180 #define ADF_ARB_RINGSRVARBEN_OFFSET 0x19C @@ -28,10 +27,6 @@ ADF_ARB_WRK_2_SER_MAP_OFFSET) + \ (ADF_ARB_REG_SIZE * index), value) -#define WRITE_CSR_ARB_WQCFG(csr_addr, index, value) \ - ADF_CSR_WR(csr_addr, (ADF_ARB_OFFSET + \ - ADF_ARB_WQCFG_OFFSET) + (ADF_ARB_REG_SIZE * index), value) - int adf_init_arb(struct adf_accel_dev *accel_dev) { struct adf_hw_device_data *hw_data = accel_dev->hw_device; @@ -45,10 +40,6 @@ int adf_init_arb(struct adf_accel_dev *accel_dev) for (arb = 0; arb < ADF_ARB_NUM; arb++) WRITE_CSR_ARB_SARCONFIG(csr, arb, arb_cfg); - /* Setup worker queue registers */ - for (i = 0; i < hw_data->num_engines; i++) - WRITE_CSR_ARB_WQCFG(csr, i, i); - /* Map worker threads to service arbiters */ hw_data->get_arb_mapping(accel_dev, &thd_2_arb_cfg); @@ -84,10 +75,6 @@ void adf_exit_arb(struct adf_accel_dev *accel_dev) for (i = 0; i < ADF_ARB_NUM; i++) WRITE_CSR_ARB_SARCONFIG(csr, i, 0); - /* Shutdown work queue */ - for (i = 0; i < hw_data->num_engines; i++) - WRITE_CSR_ARB_WQCFG(csr, i, 0); - /* Unmap worker threads to service arbiters */ for (i = 0; i < hw_data->num_engines; i++) WRITE_CSR_ARB_WRK_2_SER_MAP(csr, i, 0); From patchwork Mon Oct 12 20:38:30 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Giovanni Cabiddu X-Patchwork-Id: 285443 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0FCD0C433DF for ; Mon, 12 Oct 2020 20:39:31 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id D1CF12145D for ; Mon, 12 Oct 2020 20:39:30 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via 
listexpand id S1730088AbgJLUja (ORCPT ); Mon, 12 Oct 2020 16:39:30 -0400 Received: from mga09.intel.com ([134.134.136.24]:34011 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730636AbgJLUj3 (ORCPT ); Mon, 12 Oct 2020 16:39:29 -0400 IronPort-SDR: ldapMlVdusreSCSIBHGuDeGqFVjxkq6GjDJhZC302EiXVSAX4P2FpWzCRjVeudzAkUAlMLYpzt u21mEEzWiCQw== X-IronPort-AV: E=McAfee;i="6000,8403,9772"; a="165913125" X-IronPort-AV: E=Sophos;i="5.77,367,1596524400"; d="scan'208";a="165913125" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Oct 2020 13:39:29 -0700 IronPort-SDR: zep0n+mu2xgZJzxpzKTIr/8IoQBacU2tQ18CguSQCZ1BOsLZQNSC7NggBwEZkjKiWGuby3pOlM 9CXM+pPjv5GQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.77,367,1596524400"; d="scan'208";a="299328192" Received: from silpixa00400314.ir.intel.com (HELO silpixa00400314.ger.corp.intel.com) ([10.237.222.51]) by fmsmga007.fm.intel.com with ESMTP; 12 Oct 2020 13:39:27 -0700 From: Giovanni Cabiddu To: herbert@gondor.apana.org.au Cc: linux-crypto@vger.kernel.org, qat-linux@intel.com, Giovanni Cabiddu , Fiona Trahe , Wojciech Ziemba , Andy Shevchenko Subject: [PATCH 14/31] crypto: qat - remove unused macros in arbiter module Date: Mon, 12 Oct 2020 21:38:30 +0100 Message-Id: <20201012203847.340030-15-giovanni.cabiddu@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201012203847.340030-1-giovanni.cabiddu@intel.com> References: <20201012203847.340030-1-giovanni.cabiddu@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Remove the unused macros ADF_ARB_WTR_SIZE, ADF_ARB_WTR_OFFSET and ADF_ARB_RO_EN_OFFSET. These macros were left in commit 34074205bb9f ("crypto: qat - remove redundant arbiter configuration") that removed the logic that used those defines. 
Signed-off-by: Giovanni Cabiddu Reviewed-by: Fiona Trahe Reviewed-by: Wojciech Ziemba Reviewed-by: Andy Shevchenko --- drivers/crypto/qat/qat_common/adf_hw_arbiter.c | 3 --- 1 file changed, 3 deletions(-) diff --git a/drivers/crypto/qat/qat_common/adf_hw_arbiter.c b/drivers/crypto/qat/qat_common/adf_hw_arbiter.c index cbb9f0b8ff74..be2fd264a223 100644 --- a/drivers/crypto/qat/qat_common/adf_hw_arbiter.c +++ b/drivers/crypto/qat/qat_common/adf_hw_arbiter.c @@ -6,11 +6,8 @@ #define ADF_ARB_NUM 4 #define ADF_ARB_REG_SIZE 0x4 -#define ADF_ARB_WTR_SIZE 0x20 #define ADF_ARB_OFFSET 0x30000 #define ADF_ARB_REG_SLOT 0x1000 -#define ADF_ARB_WTR_OFFSET 0x010 -#define ADF_ARB_RO_EN_OFFSET 0x090 #define ADF_ARB_WRK_2_SER_MAP_OFFSET 0x180 #define ADF_ARB_RINGSRVARBEN_OFFSET 0x19C From patchwork Mon Oct 12 20:38:31 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Giovanni Cabiddu X-Patchwork-Id: 269561 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 68174C433DF for ; Mon, 12 Oct 2020 20:39:33 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 18C0920BED for ; Mon, 12 Oct 2020 20:39:33 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730649AbgJLUjc (ORCPT ); Mon, 12 Oct 2020 16:39:32 -0400 Received: from mga09.intel.com ([134.134.136.24]:34024 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730636AbgJLUjc (ORCPT ); Mon, 12 Oct 2020 16:39:32 -0400 IronPort-SDR: vGf9IAU8OK7GD+7I0kujOpZLOt3J2WDCEP+Z0Wf6yO3IVSp+HmKaM8xJQSKXW74Awq4yG9CMYr fsWQiP+kcjOQ== X-IronPort-AV: E=McAfee;i="6000,8403,9772"; a="165913128" X-IronPort-AV: E=Sophos;i="5.77,367,1596524400"; d="scan'208";a="165913128" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Oct 2020 13:39:31 -0700 IronPort-SDR: /VJ5IA95zBDPU4VtNW5zco477ngdjfKUOwNzcGcAKrhaPy2/oGhEZUt/KTqhLg2kbouDofZOTT /qKfwxmfHG7g== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.77,367,1596524400"; d="scan'208";a="299328198" Received: from silpixa00400314.ir.intel.com (HELO silpixa00400314.ger.corp.intel.com) ([10.237.222.51]) by fmsmga007.fm.intel.com with ESMTP; 12 Oct 2020 13:39:29 -0700 From: Giovanni Cabiddu To: herbert@gondor.apana.org.au Cc: linux-crypto@vger.kernel.org, qat-linux@intel.com, Giovanni Cabiddu , Fiona Trahe , Wojciech Ziemba , Andy Shevchenko Subject: [PATCH 15/31] crypto: qat - abstract arbiter access Date: Mon, 12 Oct 2020 21:38:31 +0100 Message-Id: <20201012203847.340030-16-giovanni.cabiddu@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201012203847.340030-1-giovanni.cabiddu@intel.com> References: <20201012203847.340030-1-giovanni.cabiddu@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org The arbiter configuration, the offset to the arbiter config CSR and the offset to the worker thread to service arbiter CSR are 
going to be different in QAT GEN4 devices although the logic that uses them is the same across all QAT generations. This patch reworks the gen-specific parts of the arbiter access code by introducing the arb_info structure, that contains the values that are generation specific, and a function in the structure adf_hw_device_data, get_arb_info(), that allows to get them. Since the arbiter values for QAT GEN2 devices (c62x, c3xxx and dh895xcc) are the same, a single function, adf_gen2_get_arb_info() is provided in adf_gen2_hw_data.c and referenced by each QAT GEN2 driver. Signed-off-by: Giovanni Cabiddu Reviewed-by: Fiona Trahe Reviewed-by: Wojciech Ziemba Reviewed-by: Andy Shevchenko --- .../crypto/qat/qat_c3xxx/adf_c3xxx_hw_data.c | 1 + .../crypto/qat/qat_c62x/adf_c62x_hw_data.c | 1 + .../crypto/qat/qat_common/adf_accel_devices.h | 7 ++++ .../crypto/qat/qat_common/adf_gen2_hw_data.c | 8 ++++ .../crypto/qat/qat_common/adf_gen2_hw_data.h | 6 +++ .../crypto/qat/qat_common/adf_hw_arbiter.c | 41 ++++++++++++------- .../qat/qat_dh895xcc/adf_dh895xcc_hw_data.c | 1 + 7 files changed, 50 insertions(+), 15 deletions(-) diff --git a/drivers/crypto/qat/qat_c3xxx/adf_c3xxx_hw_data.c b/drivers/crypto/qat/qat_c3xxx/adf_c3xxx_hw_data.c index f72ed415800e..f3f33dc0c316 100644 --- a/drivers/crypto/qat/qat_c3xxx/adf_c3xxx_hw_data.c +++ b/drivers/crypto/qat/qat_c3xxx/adf_c3xxx_hw_data.c @@ -203,6 +203,7 @@ void adf_init_hw_data_c3xxx(struct adf_hw_device_data *hw_data) hw_data->get_pf2vf_offset = get_pf2vf_offset; hw_data->get_vintmsk_offset = get_vintmsk_offset; hw_data->get_admin_info = adf_gen2_get_admin_info; + hw_data->get_arb_info = adf_gen2_get_arb_info; hw_data->get_sku = get_sku; hw_data->fw_name = ADF_C3XXX_FW; hw_data->fw_mmp_name = ADF_C3XXX_MMP; diff --git a/drivers/crypto/qat/qat_c62x/adf_c62x_hw_data.c b/drivers/crypto/qat/qat_c62x/adf_c62x_hw_data.c index d4443523dc9d..53c03b2f763f 100644 --- a/drivers/crypto/qat/qat_c62x/adf_c62x_hw_data.c +++ b/drivers/crypto/qat/qat_c62x/adf_c62x_hw_data.c @@ -213,6 +213,7 @@ void adf_init_hw_data_c62x(struct adf_hw_device_data *hw_data) hw_data->get_pf2vf_offset = get_pf2vf_offset; hw_data->get_vintmsk_offset = get_vintmsk_offset; hw_data->get_admin_info = adf_gen2_get_admin_info; + hw_data->get_arb_info = adf_gen2_get_arb_info; hw_data->get_sku = get_sku; hw_data->fw_name = ADF_C62X_FW; hw_data->fw_mmp_name = ADF_C62X_MMP; diff --git a/drivers/crypto/qat/qat_common/adf_accel_devices.h b/drivers/crypto/qat/qat_common/adf_accel_devices.h index 779f62fde3bd..951072feb176 100644 --- a/drivers/crypto/qat/qat_common/adf_accel_devices.h +++ b/drivers/crypto/qat/qat_common/adf_accel_devices.h @@ -97,6 +97,12 @@ struct adf_hw_device_class { u32 instances; } __packed; +struct arb_info { + u32 arb_cfg; + u32 arb_offset; + u32 wt2sam_offset; +}; + struct admin_info { u32 admin_msg_ur; u32 admin_msg_lr; @@ -144,6 +150,7 @@ struct adf_hw_device_data { u32 (*get_num_accels)(struct adf_hw_device_data *self); u32 (*get_pf2vf_offset)(u32 i); u32 (*get_vintmsk_offset)(u32 i); + void (*get_arb_info)(struct arb_info *arb_csrs_info); void (*get_admin_info)(struct admin_info *admin_csrs_info); enum dev_sku_info (*get_sku)(struct adf_hw_device_data *self); int (*alloc_irq)(struct adf_accel_dev *accel_dev); diff --git a/drivers/crypto/qat/qat_common/adf_gen2_hw_data.c b/drivers/crypto/qat/qat_common/adf_gen2_hw_data.c index 15a0bc921d7e..b2f770cc29d8 100644 --- a/drivers/crypto/qat/qat_common/adf_gen2_hw_data.c +++ b/drivers/crypto/qat/qat_common/adf_gen2_hw_data.c @@ -45,6 +45,14 
@@ void adf_gen2_get_admin_info(struct admin_info *admin_csrs_info) } EXPORT_SYMBOL_GPL(adf_gen2_get_admin_info); +void adf_gen2_get_arb_info(struct arb_info *arb_info) +{ + arb_info->arb_cfg = ADF_ARB_CONFIG; + arb_info->arb_offset = ADF_ARB_OFFSET; + arb_info->wt2sam_offset = ADF_ARB_WRK_2_SER_MAP_OFFSET; +} +EXPORT_SYMBOL_GPL(adf_gen2_get_arb_info); + static u32 read_csr_ring_head(void __iomem *csr_base_addr, u32 bank, u32 ring) { return READ_CSR_RING_HEAD(csr_base_addr, bank, ring); diff --git a/drivers/crypto/qat/qat_common/adf_gen2_hw_data.h b/drivers/crypto/qat/qat_common/adf_gen2_hw_data.h index e9d2591b2be8..fe4ea3220bca 100644 --- a/drivers/crypto/qat/qat_common/adf_gen2_hw_data.h +++ b/drivers/crypto/qat/qat_common/adf_gen2_hw_data.h @@ -97,9 +97,15 @@ do { \ #define ADF_ADMINMSGLR_OFFSET (0x3A000 + 0x578) #define ADF_MAILBOX_BASE_OFFSET 0x20970 +/* Arbiter configuration */ +#define ADF_ARB_OFFSET 0x30000 +#define ADF_ARB_WRK_2_SER_MAP_OFFSET 0x180 +#define ADF_ARB_CONFIG (BIT(31) | BIT(6) | BIT(0)) + void adf_gen2_cfg_iov_thds(struct adf_accel_dev *accel_dev, bool enable, int num_a_regs, int num_b_regs); void adf_gen2_init_hw_csr_ops(struct adf_hw_csr_ops *csr_ops); void adf_gen2_get_admin_info(struct admin_info *admin_csrs_info); +void adf_gen2_get_arb_info(struct arb_info *arb_info); #endif diff --git a/drivers/crypto/qat/qat_common/adf_hw_arbiter.c b/drivers/crypto/qat/qat_common/adf_hw_arbiter.c index be2fd264a223..9dc9d58f6093 100644 --- a/drivers/crypto/qat/qat_common/adf_hw_arbiter.c +++ b/drivers/crypto/qat/qat_common/adf_hw_arbiter.c @@ -6,36 +6,39 @@ #define ADF_ARB_NUM 4 #define ADF_ARB_REG_SIZE 0x4 -#define ADF_ARB_OFFSET 0x30000 #define ADF_ARB_REG_SLOT 0x1000 -#define ADF_ARB_WRK_2_SER_MAP_OFFSET 0x180 #define ADF_ARB_RINGSRVARBEN_OFFSET 0x19C #define WRITE_CSR_ARB_RINGSRVARBEN(csr_addr, index, value) \ ADF_CSR_WR(csr_addr, ADF_ARB_RINGSRVARBEN_OFFSET + \ (ADF_ARB_REG_SLOT * index), value) -#define WRITE_CSR_ARB_SARCONFIG(csr_addr, index, value) \ - ADF_CSR_WR(csr_addr, ADF_ARB_OFFSET + \ - (ADF_ARB_REG_SIZE * index), value) +#define WRITE_CSR_ARB_SARCONFIG(csr_addr, arb_offset, index, value) \ + ADF_CSR_WR(csr_addr, (arb_offset) + \ + (ADF_ARB_REG_SIZE * (index)), value) -#define WRITE_CSR_ARB_WRK_2_SER_MAP(csr_addr, index, value) \ - ADF_CSR_WR(csr_addr, (ADF_ARB_OFFSET + \ - ADF_ARB_WRK_2_SER_MAP_OFFSET) + \ - (ADF_ARB_REG_SIZE * index), value) +#define WRITE_CSR_ARB_WT2SAM(csr_addr, arb_offset, wt_offset, index, value) \ + ADF_CSR_WR(csr_addr, ((arb_offset) + (wt_offset)) + \ + (ADF_ARB_REG_SIZE * (index)), value) int adf_init_arb(struct adf_accel_dev *accel_dev) { struct adf_hw_device_data *hw_data = accel_dev->hw_device; void __iomem *csr = accel_dev->transport->banks[0].csr_addr; - u32 arb_cfg = 0x1 << 31 | 0x4 << 4 | 0x1; - u32 arb, i; + u32 arb_off, wt_off, arb_cfg; const u32 *thd_2_arb_cfg; + struct arb_info info; + int arb, i; + + hw_data->get_arb_info(&info); + arb_cfg = info.arb_cfg; + arb_off = info.arb_offset; + wt_off = info.wt2sam_offset; /* Service arb configured for 32 bytes responses and * ring flow control check enabled. 
*/ for (arb = 0; arb < ADF_ARB_NUM; arb++) - WRITE_CSR_ARB_SARCONFIG(csr, arb, arb_cfg); + WRITE_CSR_ARB_SARCONFIG(csr, arb_off, arb, arb_cfg); /* Map worker threads to service arbiters */ hw_data->get_arb_mapping(accel_dev, &thd_2_arb_cfg); @@ -44,7 +47,7 @@ int adf_init_arb(struct adf_accel_dev *accel_dev) return -EFAULT; for (i = 0; i < hw_data->num_engines; i++) - WRITE_CSR_ARB_WRK_2_SER_MAP(csr, i, *(thd_2_arb_cfg + i)); + WRITE_CSR_ARB_WT2SAM(csr, arb_off, wt_off, i, thd_2_arb_cfg[i]); return 0; } @@ -60,21 +63,29 @@ void adf_update_ring_arb(struct adf_etr_ring_data *ring) void adf_exit_arb(struct adf_accel_dev *accel_dev) { struct adf_hw_device_data *hw_data = accel_dev->hw_device; + u32 arb_off, wt_off; + struct arb_info info; void __iomem *csr; unsigned int i; + hw_data->get_arb_info(&info); + arb_off = info.arb_offset; + wt_off = info.wt2sam_offset; + if (!accel_dev->transport) return; csr = accel_dev->transport->banks[0].csr_addr; + hw_data->get_arb_info(&info); + /* Reset arbiter configuration */ for (i = 0; i < ADF_ARB_NUM; i++) - WRITE_CSR_ARB_SARCONFIG(csr, i, 0); + WRITE_CSR_ARB_SARCONFIG(csr, arb_off, i, 0); /* Unmap worker threads to service arbiters */ for (i = 0; i < hw_data->num_engines; i++) - WRITE_CSR_ARB_WRK_2_SER_MAP(csr, i, 0); + WRITE_CSR_ARB_WT2SAM(csr, arb_off, wt_off, i, 0); /* Disable arbitration on all rings */ for (i = 0; i < GET_MAX_BANKS(accel_dev); i++) diff --git a/drivers/crypto/qat/qat_dh895xcc/adf_dh895xcc_hw_data.c b/drivers/crypto/qat/qat_dh895xcc/adf_dh895xcc_hw_data.c index c568e9808cec..2e7017a3ad46 100644 --- a/drivers/crypto/qat/qat_dh895xcc/adf_dh895xcc_hw_data.c +++ b/drivers/crypto/qat/qat_dh895xcc/adf_dh895xcc_hw_data.c @@ -211,6 +211,7 @@ void adf_init_hw_data_dh895xcc(struct adf_hw_device_data *hw_data) hw_data->get_pf2vf_offset = get_pf2vf_offset; hw_data->get_vintmsk_offset = get_vintmsk_offset; hw_data->get_admin_info = adf_gen2_get_admin_info; + hw_data->get_arb_info = adf_gen2_get_arb_info; hw_data->get_sram_bar_id = get_sram_bar_id; hw_data->get_sku = get_sku; hw_data->fw_name = ADF_DH895XCC_FW; From patchwork Mon Oct 12 20:38:32 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Giovanni Cabiddu X-Patchwork-Id: 285442 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3E2E3C43457 for ; Mon, 12 Oct 2020 20:39:35 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 0E896208D5 for ; Mon, 12 Oct 2020 20:39:35 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730658AbgJLUje (ORCPT ); Mon, 12 Oct 2020 16:39:34 -0400 Received: from mga09.intel.com ([134.134.136.24]:34024 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730636AbgJLUje (ORCPT ); Mon, 12 Oct 2020 16:39:34 -0400 IronPort-SDR: ijrhTwIbkl8w0R4of9aI27psf2nCg22v3J20tfM2QvtRNwbEBTX9fDgtFBtZTvpErAnulqamP9 Rq1k3XmdR16g== X-IronPort-AV: E=McAfee;i="6000,8403,9772"; a="165913140" X-IronPort-AV: E=Sophos;i="5.77,367,1596524400"; 
d="scan'208";a="165913140" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Oct 2020 13:39:33 -0700 IronPort-SDR: QJmlpHjDpLIlS6uweWah16HJq7F+3xo4uqQm7CCUH1ZGIWnUYJMLuQANr32dZKzyK8TFe1+xoD 0yhLZ9SM958Q== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.77,367,1596524400"; d="scan'208";a="299328206" Received: from silpixa00400314.ir.intel.com (HELO silpixa00400314.ger.corp.intel.com) ([10.237.222.51]) by fmsmga007.fm.intel.com with ESMTP; 12 Oct 2020 13:39:31 -0700 From: Giovanni Cabiddu To: herbert@gondor.apana.org.au Cc: linux-crypto@vger.kernel.org, qat-linux@intel.com, Marco Chiappero , Giovanni Cabiddu , Wojciech Ziemba , Fiona Trahe , Andy Shevchenko Subject: [PATCH 16/31] crypto: qat - add support for capability detection Date: Mon, 12 Oct 2020 21:38:32 +0100 Message-Id: <20201012203847.340030-17-giovanni.cabiddu@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201012203847.340030-1-giovanni.cabiddu@intel.com> References: <20201012203847.340030-1-giovanni.cabiddu@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org From: Marco Chiappero Add logic to detect device capabilities for c62x, c3xxx and dh895xcc. Read fuses, straps and legfuses CSRs and build the device capabilities mask. This will be used to understand if a certain service is supported by a device. This patch is based on earlier work done by Conor McLoughlin. Signed-off-by: Marco Chiappero Co-developed-by: Giovanni Cabiddu Signed-off-by: Giovanni Cabiddu Reviewed-by: Wojciech Ziemba Reviewed-by: Fiona Trahe Reviewed-by: Andy Shevchenko --- .../crypto/qat/qat_c3xxx/adf_c3xxx_hw_data.c | 2 ++ drivers/crypto/qat/qat_c3xxx/adf_drv.c | 5 ++-- .../crypto/qat/qat_c62x/adf_c62x_hw_data.c | 2 ++ drivers/crypto/qat/qat_c62x/adf_drv.c | 5 ++-- .../crypto/qat/qat_common/adf_accel_devices.h | 1 + .../crypto/qat/qat_common/adf_gen2_hw_data.c | 30 +++++++++++++++++++ .../crypto/qat/qat_common/adf_gen2_hw_data.h | 4 +++ drivers/crypto/qat/qat_common/icp_qat_hw.h | 23 ++++++++++++++ .../qat/qat_dh895xcc/adf_dh895xcc_hw_data.c | 25 ++++++++++++++++ drivers/crypto/qat/qat_dh895xcc/adf_drv.c | 5 ++-- 10 files changed, 93 insertions(+), 9 deletions(-) diff --git a/drivers/crypto/qat/qat_c3xxx/adf_c3xxx_hw_data.c b/drivers/crypto/qat/qat_c3xxx/adf_c3xxx_hw_data.c index f3f33dc0c316..eb45f1b1ae3e 100644 --- a/drivers/crypto/qat/qat_c3xxx/adf_c3xxx_hw_data.c +++ b/drivers/crypto/qat/qat_c3xxx/adf_c3xxx_hw_data.c @@ -5,6 +5,7 @@ #include #include #include "adf_c3xxx_hw_data.h" +#include "icp_qat_hw.h" /* Worker thread to service arbiter mappings based on dev SKUs */ static const u32 thrd_to_arb_map_6_me_sku[] = { @@ -195,6 +196,7 @@ void adf_init_hw_data_c3xxx(struct adf_hw_device_data *hw_data) hw_data->enable_error_correction = adf_enable_error_correction; hw_data->get_accel_mask = get_accel_mask; hw_data->get_ae_mask = get_ae_mask; + hw_data->get_accel_cap = adf_gen2_get_accel_cap; hw_data->get_num_accels = get_num_accels; hw_data->get_num_aes = get_num_aes; hw_data->get_sram_bar_id = get_sram_bar_id; diff --git a/drivers/crypto/qat/qat_c3xxx/adf_drv.c b/drivers/crypto/qat/qat_c3xxx/adf_drv.c index da6e88026988..7fb3343ae8b0 100644 --- a/drivers/crypto/qat/qat_c3xxx/adf_drv.c +++ b/drivers/crypto/qat/qat_c3xxx/adf_drv.c @@ -177,9 +177,8 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent) goto out_err_disable; } - /* 
Read accelerator capabilities mask */ - pci_read_config_dword(pdev, ADF_DEVICE_LEGFUSE_OFFSET, - &hw_data->accel_capabilities_mask); + /* Get accelerator capabilities mask */ + hw_data->accel_capabilities_mask = hw_data->get_accel_cap(accel_dev); /* Find and map all the device's BARS */ i = 0; diff --git a/drivers/crypto/qat/qat_c62x/adf_c62x_hw_data.c b/drivers/crypto/qat/qat_c62x/adf_c62x_hw_data.c index 53c03b2f763f..babdffbcb846 100644 --- a/drivers/crypto/qat/qat_c62x/adf_c62x_hw_data.c +++ b/drivers/crypto/qat/qat_c62x/adf_c62x_hw_data.c @@ -5,6 +5,7 @@ #include #include #include "adf_c62x_hw_data.h" +#include "icp_qat_hw.h" /* Worker thread to service arbiter mappings based on dev SKUs */ static const u32 thrd_to_arb_map_8_me_sku[] = { @@ -205,6 +206,7 @@ void adf_init_hw_data_c62x(struct adf_hw_device_data *hw_data) hw_data->enable_error_correction = adf_enable_error_correction; hw_data->get_accel_mask = get_accel_mask; hw_data->get_ae_mask = get_ae_mask; + hw_data->get_accel_cap = adf_gen2_get_accel_cap; hw_data->get_num_accels = get_num_accels; hw_data->get_num_aes = get_num_aes; hw_data->get_sram_bar_id = get_sram_bar_id; diff --git a/drivers/crypto/qat/qat_c62x/adf_drv.c b/drivers/crypto/qat/qat_c62x/adf_drv.c index 3da697a566ad..1f5de442e1e6 100644 --- a/drivers/crypto/qat/qat_c62x/adf_drv.c +++ b/drivers/crypto/qat/qat_c62x/adf_drv.c @@ -177,9 +177,8 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent) goto out_err_disable; } - /* Read accelerator capabilities mask */ - pci_read_config_dword(pdev, ADF_DEVICE_LEGFUSE_OFFSET, - &hw_data->accel_capabilities_mask); + /* Get accelerator capabilities mask */ + hw_data->accel_capabilities_mask = hw_data->get_accel_cap(accel_dev); /* Find and map all the device's BARS */ i = (hw_data->fuses & ADF_DEVICE_FUSECTL_MASK) ? 
1 : 0; diff --git a/drivers/crypto/qat/qat_common/adf_accel_devices.h b/drivers/crypto/qat/qat_common/adf_accel_devices.h index 951072feb176..692e39e5e878 100644 --- a/drivers/crypto/qat/qat_common/adf_accel_devices.h +++ b/drivers/crypto/qat/qat_common/adf_accel_devices.h @@ -143,6 +143,7 @@ struct adf_hw_device_data { struct adf_hw_device_class *dev_class; u32 (*get_accel_mask)(struct adf_hw_device_data *self); u32 (*get_ae_mask)(struct adf_hw_device_data *self); + u32 (*get_accel_cap)(struct adf_accel_dev *accel_dev); u32 (*get_sram_bar_id)(struct adf_hw_device_data *self); u32 (*get_misc_bar_id)(struct adf_hw_device_data *self); u32 (*get_etr_bar_id)(struct adf_hw_device_data *self); diff --git a/drivers/crypto/qat/qat_common/adf_gen2_hw_data.c b/drivers/crypto/qat/qat_common/adf_gen2_hw_data.c index b2f770cc29d8..d5560e714167 100644 --- a/drivers/crypto/qat/qat_common/adf_gen2_hw_data.c +++ b/drivers/crypto/qat/qat_common/adf_gen2_hw_data.c @@ -1,6 +1,8 @@ // SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0-only) /* Copyright(c) 2020 Intel Corporation */ #include "adf_gen2_hw_data.h" +#include "icp_qat_hw.h" +#include void adf_gen2_cfg_iov_thds(struct adf_accel_dev *accel_dev, bool enable, int num_a_regs, int num_b_regs) @@ -136,3 +138,31 @@ void adf_gen2_init_hw_csr_ops(struct adf_hw_csr_ops *csr_ops) csr_ops->write_csr_int_flag_and_col = write_csr_int_flag_and_col; } EXPORT_SYMBOL_GPL(adf_gen2_init_hw_csr_ops); + +u32 adf_gen2_get_accel_cap(struct adf_accel_dev *accel_dev) +{ + struct adf_hw_device_data *hw_data = accel_dev->hw_device; + struct pci_dev *pdev = accel_dev->accel_pci_dev.pci_dev; + u32 straps = hw_data->straps; + u32 fuses = hw_data->fuses; + u32 legfuses; + u32 capabilities = ICP_ACCEL_CAPABILITIES_CRYPTO_SYMMETRIC | + ICP_ACCEL_CAPABILITIES_CRYPTO_ASYMMETRIC | + ICP_ACCEL_CAPABILITIES_AUTHENTICATION; + + /* Read accelerator capabilities mask */ + pci_read_config_dword(pdev, ADF_DEVICE_LEGFUSE_OFFSET, &legfuses); + + if (legfuses & ICP_ACCEL_MASK_CIPHER_SLICE) + capabilities &= ~ICP_ACCEL_CAPABILITIES_CRYPTO_SYMMETRIC; + if (legfuses & ICP_ACCEL_MASK_PKE_SLICE) + capabilities &= ~ICP_ACCEL_CAPABILITIES_CRYPTO_ASYMMETRIC; + if (legfuses & ICP_ACCEL_MASK_AUTH_SLICE) + capabilities &= ~ICP_ACCEL_CAPABILITIES_AUTHENTICATION; + + if ((straps | fuses) & ADF_POWERGATE_PKE) + capabilities &= ~ICP_ACCEL_CAPABILITIES_CRYPTO_ASYMMETRIC; + + return capabilities; +} +EXPORT_SYMBOL_GPL(adf_gen2_get_accel_cap); diff --git a/drivers/crypto/qat/qat_common/adf_gen2_hw_data.h b/drivers/crypto/qat/qat_common/adf_gen2_hw_data.h index fe4ea3220bca..6c860aedb301 100644 --- a/drivers/crypto/qat/qat_common/adf_gen2_hw_data.h +++ b/drivers/crypto/qat/qat_common/adf_gen2_hw_data.h @@ -102,10 +102,14 @@ do { \ #define ADF_ARB_WRK_2_SER_MAP_OFFSET 0x180 #define ADF_ARB_CONFIG (BIT(31) | BIT(6) | BIT(0)) +/* Power gating */ +#define ADF_POWERGATE_PKE BIT(24) + void adf_gen2_cfg_iov_thds(struct adf_accel_dev *accel_dev, bool enable, int num_a_regs, int num_b_regs); void adf_gen2_init_hw_csr_ops(struct adf_hw_csr_ops *csr_ops); void adf_gen2_get_admin_info(struct admin_info *admin_csrs_info); void adf_gen2_get_arb_info(struct arb_info *arb_info); +u32 adf_gen2_get_accel_cap(struct adf_accel_dev *accel_dev); #endif diff --git a/drivers/crypto/qat/qat_common/icp_qat_hw.h b/drivers/crypto/qat/qat_common/icp_qat_hw.h index c4b6ef1506ab..4aa5d724e11b 100644 --- a/drivers/crypto/qat/qat_common/icp_qat_hw.h +++ b/drivers/crypto/qat/qat_common/icp_qat_hw.h @@ -65,6 +65,29 @@ struct icp_qat_hw_auth_config { 
__u32 reserved; }; +enum icp_qat_slice_mask { + ICP_ACCEL_MASK_CIPHER_SLICE = BIT(0), + ICP_ACCEL_MASK_AUTH_SLICE = BIT(1), + ICP_ACCEL_MASK_PKE_SLICE = BIT(2), + ICP_ACCEL_MASK_COMPRESS_SLICE = BIT(3), + ICP_ACCEL_MASK_LZS_SLICE = BIT(4), + ICP_ACCEL_MASK_EIA3_SLICE = BIT(5), + ICP_ACCEL_MASK_SHA3_SLICE = BIT(6), +}; + +enum icp_qat_capabilities_mask { + ICP_ACCEL_CAPABILITIES_CRYPTO_SYMMETRIC = BIT(0), + ICP_ACCEL_CAPABILITIES_CRYPTO_ASYMMETRIC = BIT(1), + ICP_ACCEL_CAPABILITIES_CIPHER = BIT(2), + ICP_ACCEL_CAPABILITIES_AUTHENTICATION = BIT(3), + ICP_ACCEL_CAPABILITIES_RESERVED_1 = BIT(4), + ICP_ACCEL_CAPABILITIES_COMPRESSION = BIT(5), + ICP_ACCEL_CAPABILITIES_LZS_COMPRESSION = BIT(6), + ICP_ACCEL_CAPABILITIES_RAND = BIT(7), + ICP_ACCEL_CAPABILITIES_ZUC = BIT(8), + ICP_ACCEL_CAPABILITIES_SHA3 = BIT(9), +}; + #define QAT_AUTH_MODE_BITPOS 4 #define QAT_AUTH_MODE_MASK 0xF #define QAT_AUTH_ALGO_BITPOS 0 diff --git a/drivers/crypto/qat/qat_dh895xcc/adf_dh895xcc_hw_data.c b/drivers/crypto/qat/qat_dh895xcc/adf_dh895xcc_hw_data.c index 2e7017a3ad46..7970ebb67f28 100644 --- a/drivers/crypto/qat/qat_dh895xcc/adf_dh895xcc_hw_data.c +++ b/drivers/crypto/qat/qat_dh895xcc/adf_dh895xcc_hw_data.c @@ -5,6 +5,7 @@ #include #include #include "adf_dh895xcc_hw_data.h" +#include "icp_qat_hw.h" /* Worker thread to service arbiter mappings based on dev SKUs */ static const u32 thrd_to_arb_map_sku4[] = { @@ -83,6 +84,29 @@ static u32 get_sram_bar_id(struct adf_hw_device_data *self) return ADF_DH895XCC_SRAM_BAR; } +static u32 get_accel_cap(struct adf_accel_dev *accel_dev) +{ + struct pci_dev *pdev = accel_dev->accel_pci_dev.pci_dev; + u32 capabilities; + u32 legfuses; + + capabilities = ICP_ACCEL_CAPABILITIES_CRYPTO_SYMMETRIC | + ICP_ACCEL_CAPABILITIES_CRYPTO_ASYMMETRIC | + ICP_ACCEL_CAPABILITIES_AUTHENTICATION; + + /* Read accelerator capabilities mask */ + pci_read_config_dword(pdev, ADF_DEVICE_LEGFUSE_OFFSET, &legfuses); + + if (legfuses & ICP_ACCEL_MASK_CIPHER_SLICE) + capabilities &= ~ICP_ACCEL_CAPABILITIES_CRYPTO_SYMMETRIC; + if (legfuses & ICP_ACCEL_MASK_PKE_SLICE) + capabilities &= ~ICP_ACCEL_CAPABILITIES_CRYPTO_ASYMMETRIC; + if (legfuses & ICP_ACCEL_MASK_AUTH_SLICE) + capabilities &= ~ICP_ACCEL_CAPABILITIES_AUTHENTICATION; + + return capabilities; +} + static enum dev_sku_info get_sku(struct adf_hw_device_data *self) { int sku = (self->fuses & ADF_DH895XCC_FUSECTL_SKU_MASK) @@ -204,6 +228,7 @@ void adf_init_hw_data_dh895xcc(struct adf_hw_device_data *hw_data) hw_data->enable_error_correction = adf_enable_error_correction; hw_data->get_accel_mask = get_accel_mask; hw_data->get_ae_mask = get_ae_mask; + hw_data->get_accel_cap = get_accel_cap; hw_data->get_num_accels = get_num_accels; hw_data->get_num_aes = get_num_aes; hw_data->get_etr_bar_id = get_etr_bar_id; diff --git a/drivers/crypto/qat/qat_dh895xcc/adf_drv.c b/drivers/crypto/qat/qat_dh895xcc/adf_drv.c index d7941bc2bafd..a9ec4357144c 100644 --- a/drivers/crypto/qat/qat_dh895xcc/adf_drv.c +++ b/drivers/crypto/qat/qat_dh895xcc/adf_drv.c @@ -177,9 +177,8 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent) goto out_err_disable; } - /* Read accelerator capabilities mask */ - pci_read_config_dword(pdev, ADF_DEVICE_LEGFUSE_OFFSET, - &hw_data->accel_capabilities_mask); + /* Get accelerator capabilities mask */ + hw_data->accel_capabilities_mask = hw_data->get_accel_cap(accel_dev); /* Find and map all the device's BARS */ i = 0; From patchwork Mon Oct 12 20:38:33 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 
Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Giovanni Cabiddu X-Patchwork-Id: 269560 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id AEBB2C433E7 for ; Mon, 12 Oct 2020 20:39:36 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 7FFED2145D for ; Mon, 12 Oct 2020 20:39:36 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730664AbgJLUjf (ORCPT ); Mon, 12 Oct 2020 16:39:35 -0400 Received: from mga09.intel.com ([134.134.136.24]:34024 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730636AbgJLUjf (ORCPT ); Mon, 12 Oct 2020 16:39:35 -0400 IronPort-SDR: ReHo/Mu8KFQEeFlIrhOeMca7QUKDK93Y4saPXr16IAHQ6nOY/1NRd+6E8s3bP/aRfMP3RB+QMm hQ2jpI28YNsQ== X-IronPort-AV: E=McAfee;i="6000,8403,9772"; a="165913143" X-IronPort-AV: E=Sophos;i="5.77,367,1596524400"; d="scan'208";a="165913143" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Oct 2020 13:39:35 -0700 IronPort-SDR: /N/o9DxB8r8bdZKCK4ZMxDOdLBC6Qiboox3szO14i7vgMy3GxOB2LBczPheQgp6/TDfVa5jbtf PfRdt7JxzRqQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.77,367,1596524400"; d="scan'208";a="299328210" Received: from silpixa00400314.ir.intel.com (HELO silpixa00400314.ger.corp.intel.com) ([10.237.222.51]) by fmsmga007.fm.intel.com with ESMTP; 12 Oct 2020 13:39:33 -0700 From: Giovanni Cabiddu To: herbert@gondor.apana.org.au Cc: linux-crypto@vger.kernel.org, qat-linux@intel.com, Giovanni Cabiddu , Wojciech Ziemba , Fiona Trahe , Andy Shevchenko Subject: [PATCH 17/31] crypto: qat - register crypto instances based on capability Date: Mon, 12 Oct 2020 21:38:33 +0100 Message-Id: <20201012203847.340030-18-giovanni.cabiddu@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201012203847.340030-1-giovanni.cabiddu@intel.com> References: <20201012203847.340030-1-giovanni.cabiddu@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Introduce the function adf_hw_dev_has_crypto() that returns true if a device supports symmetric crypto, asymmetric crypto and authentication services. If a device has crypto capabilities, add crypto instances to the configuration. This is done since the function that allows to retrieve crypto instances, qat_crypto_get_instance_node(), return instances that support all crypto services. 
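As a rough stand-alone sketch of that test (not the driver code; the bit positions mirror the ICP_ACCEL_CAPABILITIES_* values introduced earlier in this series, and any missing service vetoes crypto instance configuration):

#include <stdbool.h>
#include <stdio.h>

#define CAP_CRYPTO_SYMMETRIC	(1u << 0)
#define CAP_CRYPTO_ASYMMETRIC	(1u << 1)
#define CAP_AUTHENTICATION	(1u << 3)

/* Crypto instances are registered only if all three services are present. */
static bool dev_has_crypto(unsigned int capabilities)
{
	const unsigned int required = CAP_CRYPTO_SYMMETRIC |
				      CAP_CRYPTO_ASYMMETRIC |
				      CAP_AUTHENTICATION;

	return (capabilities & required) == required;
}

int main(void)
{
	/* Full crypto SKU: instances are configured (prints 1). */
	printf("%d\n", dev_has_crypto(CAP_CRYPTO_SYMMETRIC |
				      CAP_CRYPTO_ASYMMETRIC |
				      CAP_AUTHENTICATION));

	/* Asymmetric crypto fused off: no crypto instances (prints 0). */
	printf("%d\n", dev_has_crypto(CAP_CRYPTO_SYMMETRIC |
				      CAP_AUTHENTICATION));
	return 0;
}

The implementation in the diff below expresses the same condition by masking with the complement of accel_capabilities_mask and bailing out on the first missing service.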
Signed-off-by: Giovanni Cabiddu Reviewed-by: Wojciech Ziemba Reviewed-by: Fiona Trahe Reviewed-by: Andy Shevchenko --- drivers/crypto/qat/qat_common/qat_crypto.c | 7 ++++++- drivers/crypto/qat/qat_common/qat_crypto.h | 15 +++++++++++++++ 2 files changed, 21 insertions(+), 1 deletion(-) diff --git a/drivers/crypto/qat/qat_common/qat_crypto.c b/drivers/crypto/qat/qat_common/qat_crypto.c index ab621b7dbd20..089d5d7b738e 100644 --- a/drivers/crypto/qat/qat_common/qat_crypto.c +++ b/drivers/crypto/qat/qat_common/qat_crypto.c @@ -117,10 +117,15 @@ int qat_crypto_dev_config(struct adf_accel_dev *accel_dev) { int cpus = num_online_cpus(); int banks = GET_MAX_BANKS(accel_dev); - int instances = min(cpus, banks); char key[ADF_CFG_MAX_KEY_LEN_IN_BYTES]; int i; unsigned long val; + int instances; + + if (adf_hw_dev_has_crypto(accel_dev)) + instances = min(cpus, banks); + else + instances = 0; if (adf_cfg_section_add(accel_dev, ADF_KERNEL_SEC)) goto err; diff --git a/drivers/crypto/qat/qat_common/qat_crypto.h b/drivers/crypto/qat/qat_common/qat_crypto.h index 8d11e94cbf08..b6a4c95ae003 100644 --- a/drivers/crypto/qat/qat_common/qat_crypto.h +++ b/drivers/crypto/qat/qat_common/qat_crypto.h @@ -55,4 +55,19 @@ struct qat_crypto_request { bool encryption; }; +static inline bool adf_hw_dev_has_crypto(struct adf_accel_dev *accel_dev) +{ + struct adf_hw_device_data *hw_device = accel_dev->hw_device; + u32 mask = ~hw_device->accel_capabilities_mask; + + if (mask & ADF_ACCEL_CAPABILITIES_CRYPTO_SYMMETRIC) + return false; + if (mask & ADF_ACCEL_CAPABILITIES_CRYPTO_ASYMMETRIC) + return false; + if (mask & ADF_ACCEL_CAPABILITIES_AUTHENTICATION) + return false; + + return true; +} + #endif From patchwork Mon Oct 12 20:38:34 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Giovanni Cabiddu X-Patchwork-Id: 285441 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3012DC43457 for ; Mon, 12 Oct 2020 20:39:38 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id EDD4B2145D for ; Mon, 12 Oct 2020 20:39:37 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730674AbgJLUjh (ORCPT ); Mon, 12 Oct 2020 16:39:37 -0400 Received: from mga09.intel.com ([134.134.136.24]:34024 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730636AbgJLUjh (ORCPT ); Mon, 12 Oct 2020 16:39:37 -0400 IronPort-SDR: 2ZaON1vbzZv1dI5nfrVnpWIvSdzduKzvT65ptWAQbdf7e+NWNq0xYLQKl1yjaDwmQeraWuwk+E /qSJyhbmvx8Q== X-IronPort-AV: E=McAfee;i="6000,8403,9772"; a="165913145" X-IronPort-AV: E=Sophos;i="5.77,367,1596524400"; d="scan'208";a="165913145" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Oct 2020 13:39:37 -0700 IronPort-SDR: h+bsIcJOn26VYFu3VUcUYUa1tYBf9ZwwxxHA+IjhHVeAXmTjHe4TshKQsv9E7W1s6IE8D7u5c5 TSmS3VY/oqIw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.77,367,1596524400"; 
d="scan'208";a="299328218" Received: from silpixa00400314.ir.intel.com (HELO silpixa00400314.ger.corp.intel.com) ([10.237.222.51]) by fmsmga007.fm.intel.com with ESMTP; 12 Oct 2020 13:39:35 -0700 From: Giovanni Cabiddu To: herbert@gondor.apana.org.au Cc: linux-crypto@vger.kernel.org, qat-linux@intel.com, Giovanni Cabiddu , Wojciech Ziemba , Maksim Lukoshkov , Fiona Trahe , Andy Shevchenko Subject: [PATCH 18/31] crypto: qat - enable ring after pair is programmed Date: Mon, 12 Oct 2020 21:38:34 +0100 Message-Id: <20201012203847.340030-19-giovanni.cabiddu@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201012203847.340030-1-giovanni.cabiddu@intel.com> References: <20201012203847.340030-1-giovanni.cabiddu@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Enable arbitration on the TX ring only after the RX ring is programmed. Before this change, arbitration was enabled on the TX ring before the RX ring was programmed allowing the HW to process a request before having the ring pair configured. With this change, the arbitration logic is programmed only if the TX half of the ring mask matches the RX half. This change does not affect QAT GEN2 devices (c62x, c3xxx and dh895xcc), but it is a must for QAT GEN4 devices since the CSRs of the ring pair are locked after arbitration is enabled on the TX ring. Signed-off-by: Giovanni Cabiddu Reviewed-by: Wojciech Ziemba Reviewed-by: Maksim Lukoshkov Reviewed-by: Fiona Trahe Reviewed-by: Andy Shevchenko --- .../crypto/qat/qat_common/adf_hw_arbiter.c | 20 ++++++++++++++++++- 1 file changed, 19 insertions(+), 1 deletion(-) diff --git a/drivers/crypto/qat/qat_common/adf_hw_arbiter.c b/drivers/crypto/qat/qat_common/adf_hw_arbiter.c index 9dc9d58f6093..bd03c8f54eb4 100644 --- a/drivers/crypto/qat/qat_common/adf_hw_arbiter.c +++ b/drivers/crypto/qat/qat_common/adf_hw_arbiter.c @@ -55,9 +55,27 @@ EXPORT_SYMBOL_GPL(adf_init_arb); void adf_update_ring_arb(struct adf_etr_ring_data *ring) { + struct adf_accel_dev *accel_dev = ring->bank->accel_dev; + struct adf_hw_device_data *hw_data = accel_dev->hw_device; + u32 tx_ring_mask = hw_data->tx_rings_mask; + u32 shift = hw_data->tx_rx_gap; + u32 arben, arben_tx, arben_rx; + u32 rx_ring_mask; + + /* + * Enable arbitration on a ring only if the TX half of the ring mask + * matches the RX part. This results in writes to CSR on both TX and + * RX update - only one is necessary, but both are done for + * simplicity. 
+ */ + rx_ring_mask = tx_ring_mask << shift; + arben_tx = (ring->bank->ring_mask & tx_ring_mask) >> 0; + arben_rx = (ring->bank->ring_mask & rx_ring_mask) >> shift; + arben = arben_tx & arben_rx; + WRITE_CSR_ARB_RINGSRVARBEN(ring->bank->csr_addr, ring->bank->bank_number, - ring->bank->ring_mask & 0xFF); + arben); } void adf_exit_arb(struct adf_accel_dev *accel_dev) From patchwork Mon Oct 12 20:38:35 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Giovanni Cabiddu X-Patchwork-Id: 269559 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8C0B8C433DF for ; Mon, 12 Oct 2020 20:39:41 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 58ABD20FC3 for ; Mon, 12 Oct 2020 20:39:41 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2387785AbgJLUjk (ORCPT ); Mon, 12 Oct 2020 16:39:40 -0400 Received: from mga09.intel.com ([134.134.136.24]:34024 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2387711AbgJLUjj (ORCPT ); Mon, 12 Oct 2020 16:39:39 -0400 IronPort-SDR: zlwA/5mxPgAn7Hy9kfFKOBcd9d1PyezveiHeB/L5McV34hjKcaSbIHirLjDOn8++pmKEuUjAZa yrWLNN0Qi1hQ== X-IronPort-AV: E=McAfee;i="6000,8403,9772"; a="165913151" X-IronPort-AV: E=Sophos;i="5.77,367,1596524400"; d="scan'208";a="165913151" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Oct 2020 13:39:39 -0700 IronPort-SDR: hlXmNHP9pBadaZIBhVCQqVQUrRyUfOjuuu1o2fiGj4EmakEFzT/NSaGgMU43fcQE4T3cVQGpZY MhjhQgQoYzPA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.77,367,1596524400"; d="scan'208";a="299328223" Received: from silpixa00400314.ir.intel.com (HELO silpixa00400314.ger.corp.intel.com) ([10.237.222.51]) by fmsmga007.fm.intel.com with ESMTP; 12 Oct 2020 13:39:37 -0700 From: Giovanni Cabiddu To: herbert@gondor.apana.org.au Cc: linux-crypto@vger.kernel.org, qat-linux@intel.com, Giovanni Cabiddu , Wojciech Ziemba , Maksim Lukoshkov , Fiona Trahe , Andy Shevchenko Subject: [PATCH 19/31] crypto: qat - abstract build ring base Date: Mon, 12 Oct 2020 21:38:35 +0100 Message-Id: <20201012203847.340030-20-giovanni.cabiddu@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201012203847.340030-1-giovanni.cabiddu@intel.com> References: <20201012203847.340030-1-giovanni.cabiddu@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Abstract the implementation of BUILD_RING_BASE_ADDR. This is in preparation for the introduction of the qat_4xxx driver since the value of the ring base differs between QAT GEN2 and QAT GEN4 devices. 
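For illustration, a standalone sketch of the indirection this prepares: the ring-base encoding becomes a per-generation callback rather than a macro used directly by the transport code. The GEN2 helper below mirrors the BUILD_RING_BASE_ADDR() arithmetic from this patch; the surrounding types and the example values are stand-ins, not the driver's.

#include <stdint.h>
#include <stdio.h>

struct hw_csr_ops {
        uint64_t (*build_csr_ring_base_addr)(uint64_t dma_addr, uint32_t size);
};

/* Mirrors the GEN2 BUILD_RING_BASE_ADDR() semantics. */
static uint64_t gen2_build_ring_base(uint64_t addr, uint32_t size)
{
        return (addr >> 6) & (0xFFFFFFFFFFFFFFFFULL << size);
}

static const struct hw_csr_ops gen2_csr_ops = {
        .build_csr_ring_base_addr = gen2_build_ring_base,
};

/* Callers no longer use the macro directly; they go through the ops table. */
static uint64_t ring_base_for(const struct hw_csr_ops *ops,
                              uint64_t dma_addr, uint32_t ring_size)
{
        return ops->build_csr_ring_base_addr(dma_addr, ring_size);
}

int main(void)
{
        /* Example DMA address and ring size, purely illustrative. */
        uint64_t base = ring_base_for(&gen2_csr_ops, 0x12345000ULL, 8);

        printf("ring base: 0x%llx\n", (unsigned long long)base);
        return 0;
}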
Signed-off-by: Giovanni Cabiddu Reviewed-by: Wojciech Ziemba Reviewed-by: Maksim Lukoshkov Reviewed-by: Fiona Trahe Reviewed-by: Andy Shevchenko --- drivers/crypto/qat/qat_common/adf_accel_devices.h | 1 + drivers/crypto/qat/qat_common/adf_gen2_hw_data.c | 6 ++++++ drivers/crypto/qat/qat_common/adf_gen2_hw_data.h | 2 ++ drivers/crypto/qat/qat_common/adf_transport.c | 4 +++- drivers/crypto/qat/qat_common/adf_transport_access_macros.h | 2 -- 5 files changed, 12 insertions(+), 3 deletions(-) diff --git a/drivers/crypto/qat/qat_common/adf_accel_devices.h b/drivers/crypto/qat/qat_common/adf_accel_devices.h index 692e39e5e878..1fd32c56b119 100644 --- a/drivers/crypto/qat/qat_common/adf_accel_devices.h +++ b/drivers/crypto/qat/qat_common/adf_accel_devices.h @@ -110,6 +110,7 @@ struct admin_info { }; struct adf_hw_csr_ops { + u64 (*build_csr_ring_base_addr)(dma_addr_t addr, u32 size); u32 (*read_csr_ring_head)(void __iomem *csr_base_addr, u32 bank, u32 ring); void (*write_csr_ring_head)(void __iomem *csr_base_addr, u32 bank, diff --git a/drivers/crypto/qat/qat_common/adf_gen2_hw_data.c b/drivers/crypto/qat/qat_common/adf_gen2_hw_data.c index d5560e714167..5de359165ab4 100644 --- a/drivers/crypto/qat/qat_common/adf_gen2_hw_data.c +++ b/drivers/crypto/qat/qat_common/adf_gen2_hw_data.c @@ -55,6 +55,11 @@ void adf_gen2_get_arb_info(struct arb_info *arb_info) } EXPORT_SYMBOL_GPL(adf_gen2_get_arb_info); +static u64 build_csr_ring_base_addr(dma_addr_t addr, u32 size) +{ + return BUILD_RING_BASE_ADDR(addr, size); +} + static u32 read_csr_ring_head(void __iomem *csr_base_addr, u32 bank, u32 ring) { return READ_CSR_RING_HEAD(csr_base_addr, bank, ring); @@ -124,6 +129,7 @@ static void write_csr_int_flag_and_col(void __iomem *csr_base_addr, u32 bank, void adf_gen2_init_hw_csr_ops(struct adf_hw_csr_ops *csr_ops) { + csr_ops->build_csr_ring_base_addr = build_csr_ring_base_addr; csr_ops->read_csr_ring_head = read_csr_ring_head; csr_ops->write_csr_ring_head = write_csr_ring_head; csr_ops->read_csr_ring_tail = read_csr_ring_tail; diff --git a/drivers/crypto/qat/qat_common/adf_gen2_hw_data.h b/drivers/crypto/qat/qat_common/adf_gen2_hw_data.h index 6c860aedb301..212ff395201f 100644 --- a/drivers/crypto/qat/qat_common/adf_gen2_hw_data.h +++ b/drivers/crypto/qat/qat_common/adf_gen2_hw_data.h @@ -23,6 +23,8 @@ #define ADF_RING_CSR_INT_COL_CTL_ENABLE 0x80000000 #define ADF_RING_BUNDLE_SIZE 0x1000 +#define BUILD_RING_BASE_ADDR(addr, size) \ + (((addr) >> 6) & (0xFFFFFFFFFFFFFFFFULL << (size))) #define READ_CSR_RING_HEAD(csr_base_addr, bank, ring) \ ADF_CSR_RD(csr_base_addr, (ADF_RING_BUNDLE_SIZE * (bank)) + \ ADF_RING_CSR_RING_HEAD + ((ring) << 2)) diff --git a/drivers/crypto/qat/qat_common/adf_transport.c b/drivers/crypto/qat/qat_common/adf_transport.c index 03fb7812818b..dd8f94fcb9a8 100644 --- a/drivers/crypto/qat/qat_common/adf_transport.c +++ b/drivers/crypto/qat/qat_common/adf_transport.c @@ -180,7 +180,9 @@ static int adf_init_ring(struct adf_etr_ring_data *ring) else adf_configure_rx_ring(ring); - ring_base = BUILD_RING_BASE_ADDR(ring->dma_addr, ring->ring_size); + ring_base = csr_ops->build_csr_ring_base_addr(ring->dma_addr, + ring->ring_size); + csr_ops->write_csr_ring_base(ring->bank->csr_addr, ring->bank->bank_number, ring->ring_number, ring_base); diff --git a/drivers/crypto/qat/qat_common/adf_transport_access_macros.h b/drivers/crypto/qat/qat_common/adf_transport_access_macros.h index 4642b0b5cfb0..12b1605a740e 100644 --- a/drivers/crypto/qat/qat_common/adf_transport_access_macros.h +++ 
b/drivers/crypto/qat/qat_common/adf_transport_access_macros.h @@ -56,6 +56,4 @@ ((watermark_nf << ADF_RING_CONFIG_NEAR_FULL_WM) \ | (watermark_ne << ADF_RING_CONFIG_NEAR_EMPTY_WM) \ | size) -#define BUILD_RING_BASE_ADDR(addr, size) \ - ((addr >> 6) & (0xFFFFFFFFFFFFFFFFULL << size)) #endif From patchwork Mon Oct 12 20:38:36 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Giovanni Cabiddu X-Patchwork-Id: 285440 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1BDE1C43467 for ; Mon, 12 Oct 2020 20:39:42 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id CC58820FC3 for ; Mon, 12 Oct 2020 20:39:41 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2387823AbgJLUjl (ORCPT ); Mon, 12 Oct 2020 16:39:41 -0400 Received: from mga09.intel.com ([134.134.136.24]:34024 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2387766AbgJLUjk (ORCPT ); Mon, 12 Oct 2020 16:39:40 -0400 IronPort-SDR: CcTt7/7Qn5eHpyZ1U6qzgZUyVHc9ilr7Zpjbd6GUzcYcLDqxAXDgmKtzMutVJ/uMjcCShyH1jj MJ/AkjUcddbg== X-IronPort-AV: E=McAfee;i="6000,8403,9772"; a="165913157" X-IronPort-AV: E=Sophos;i="5.77,367,1596524400"; d="scan'208";a="165913157" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Oct 2020 13:39:40 -0700 IronPort-SDR: n3m6Xdo0rHR7Tw3lfcwF0PvmRUQT/NyStfFTyOr+QL8feyvJCMtD3KX56TooXAevgWZLyo7OoC 0svJbe/vCs8A== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.77,367,1596524400"; d="scan'208";a="299328227" Received: from silpixa00400314.ir.intel.com (HELO silpixa00400314.ger.corp.intel.com) ([10.237.222.51]) by fmsmga007.fm.intel.com with ESMTP; 12 Oct 2020 13:39:39 -0700 From: Giovanni Cabiddu To: herbert@gondor.apana.org.au Cc: linux-crypto@vger.kernel.org, qat-linux@intel.com, Giovanni Cabiddu , Andy Shevchenko Subject: [PATCH 20/31] crypto: qat - replace constant masks with GENMASK Date: Mon, 12 Oct 2020 21:38:36 +0100 Message-Id: <20201012203847.340030-21-giovanni.cabiddu@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201012203847.340030-1-giovanni.cabiddu@intel.com> References: <20201012203847.340030-1-giovanni.cabiddu@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Replace constant 0xFFFFFFFFFFFFFFFFULL with GENMASK_ULL(63, 0) and 0xFFFFFFFF with GENMASK(31, 0) as they are masks. This makes code less error prone. 
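As a quick sanity check, the replacement masks are bit-for-bit identical to the constants they replace. The sketch below defines local stand-ins for the <linux/bits.h> macros so it can run in userspace; it is not driver code.

#include <assert.h>
#include <stdint.h>

/* Local stand-ins for the kernel's GENMASK()/GENMASK_ULL(). */
#define GENMASK(h, l)     ((~0U   >> (31 - (h))) & (~0U   << (l)))
#define GENMASK_ULL(h, l) ((~0ULL >> (63 - (h))) & (~0ULL << (l)))

int main(void)
{
        assert(GENMASK_ULL(63, 0) == 0xFFFFFFFFFFFFFFFFULL);
        assert(GENMASK(31, 0) == 0xFFFFFFFFU);

        /* Ring base encoding from the previous patch, spelled with the new mask. */
        uint64_t addr = 0x12345678C0ULL;
        uint32_t size = 8;
        uint64_t base = (addr >> 6) & (GENMASK_ULL(63, 0) << size);

        assert(base == ((addr >> 6) & (0xFFFFFFFFFFFFFFFFULL << size)));
        return 0;
}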
Suggested-by: Andy Shevchenko Signed-off-by: Giovanni Cabiddu Reviewed-by: Andy Shevchenko --- drivers/crypto/qat/qat_common/adf_gen2_hw_data.h | 2 +- drivers/crypto/qat/qat_common/adf_sriov.c | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/crypto/qat/qat_common/adf_gen2_hw_data.h b/drivers/crypto/qat/qat_common/adf_gen2_hw_data.h index 212ff395201f..04236a442f3c 100644 --- a/drivers/crypto/qat/qat_common/adf_gen2_hw_data.h +++ b/drivers/crypto/qat/qat_common/adf_gen2_hw_data.h @@ -24,7 +24,7 @@ #define ADF_RING_BUNDLE_SIZE 0x1000 #define BUILD_RING_BASE_ADDR(addr, size) \ - (((addr) >> 6) & (0xFFFFFFFFFFFFFFFFULL << (size))) + (((addr) >> 6) & (GENMASK_ULL(63, 0) << (size))) #define READ_CSR_RING_HEAD(csr_base_addr, bank, ring) \ ADF_CSR_RD(csr_base_addr, (ADF_RING_BUNDLE_SIZE * (bank)) + \ ADF_RING_CSR_RING_HEAD + ((ring) << 2)) diff --git a/drivers/crypto/qat/qat_common/adf_sriov.c b/drivers/crypto/qat/qat_common/adf_sriov.c index dde6c57ef15a..0e8eab057d2d 100644 --- a/drivers/crypto/qat/qat_common/adf_sriov.c +++ b/drivers/crypto/qat/qat_common/adf_sriov.c @@ -99,7 +99,7 @@ void adf_disable_sriov(struct adf_accel_dev *accel_dev) pci_disable_sriov(accel_to_pci_dev(accel_dev)); /* Disable VF to PF interrupts */ - adf_disable_vf2pf_interrupts(accel_dev, 0xFFFFFFFF); + adf_disable_vf2pf_interrupts(accel_dev, GENMASK(31, 0)); /* Clear Valid bits in AE Thread to PCIe Function Mapping */ hw_data->configure_iov_threads(accel_dev, false); From patchwork Mon Oct 12 20:38:37 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Giovanni Cabiddu X-Patchwork-Id: 285439 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7F2C0C43457 for ; Mon, 12 Oct 2020 20:39:49 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 3E10E214DB for ; Mon, 12 Oct 2020 20:39:49 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2387766AbgJLUjs (ORCPT ); Mon, 12 Oct 2020 16:39:48 -0400 Received: from mga09.intel.com ([134.134.136.24]:34024 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2387764AbgJLUjm (ORCPT ); Mon, 12 Oct 2020 16:39:42 -0400 IronPort-SDR: 5xVaKrcN5XjP5xLl13deA5Aiw6ArgOMxWCiuYk+3ERRv5l1dQ3MAiz9x8m/bO+bswzjoekvrY2 s9ET87nO3pwg== X-IronPort-AV: E=McAfee;i="6000,8403,9772"; a="165913159" X-IronPort-AV: E=Sophos;i="5.77,367,1596524400"; d="scan'208";a="165913159" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Oct 2020 13:39:41 -0700 IronPort-SDR: uKjXrXQR7v9QcxDPnQyx2aLhlZPfFmSXEINEHuh07Z7ZP5QEwNSzPkEAP+68ahNvxXvemYuIMA klYjrOCW6aIQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.77,367,1596524400"; d="scan'208";a="299328232" Received: from silpixa00400314.ir.intel.com (HELO silpixa00400314.ger.corp.intel.com) ([10.237.222.51]) by fmsmga007.fm.intel.com with ESMTP; 12 Oct 2020 13:39:40 -0700 From: Giovanni Cabiddu 
To: herbert@gondor.apana.org.au Cc: linux-crypto@vger.kernel.org, qat-linux@intel.com, Giovanni Cabiddu , Andy Shevchenko Subject: [PATCH 21/31] crypto: qat - use BIT_ULL() - 1 pattern for masks Date: Mon, 12 Oct 2020 21:38:37 +0100 Message-Id: <20201012203847.340030-22-giovanni.cabiddu@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201012203847.340030-1-giovanni.cabiddu@intel.com> References: <20201012203847.340030-1-giovanni.cabiddu@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Replace occurrences of the pattern GENMASK_ULL(var - 1, 0)) with BIT_ULL(var) - 1 since it produces better code and it is easier to read. Suggested-by: Andy Shevchenko Signed-off-by: Giovanni Cabiddu Reviewed-by: Andy Shevchenko --- drivers/crypto/qat/qat_common/adf_sriov.c | 2 +- drivers/crypto/qat/qat_dh895xcc/adf_dh895xcc_hw_data.c | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/crypto/qat/qat_common/adf_sriov.c b/drivers/crypto/qat/qat_common/adf_sriov.c index 0e8eab057d2d..9a0f6db83106 100644 --- a/drivers/crypto/qat/qat_common/adf_sriov.c +++ b/drivers/crypto/qat/qat_common/adf_sriov.c @@ -65,7 +65,7 @@ static int adf_enable_sriov(struct adf_accel_dev *accel_dev) hw_data->configure_iov_threads(accel_dev, true); /* Enable VF to PF interrupts for all VFs */ - adf_enable_vf2pf_interrupts(accel_dev, GENMASK_ULL(totalvfs - 1, 0)); + adf_enable_vf2pf_interrupts(accel_dev, BIT_ULL(totalvfs) - 1); /* * Due to the hardware design, when SR-IOV and the ring arbiter diff --git a/drivers/crypto/qat/qat_dh895xcc/adf_dh895xcc_hw_data.c b/drivers/crypto/qat/qat_dh895xcc/adf_dh895xcc_hw_data.c index 7970ebb67f28..1e83d9397b11 100644 --- a/drivers/crypto/qat/qat_dh895xcc/adf_dh895xcc_hw_data.c +++ b/drivers/crypto/qat/qat_dh895xcc/adf_dh895xcc_hw_data.c @@ -195,7 +195,7 @@ static void adf_enable_ints(struct adf_accel_dev *accel_dev) /* Enable bundle and misc interrupts */ ADF_CSR_WR(addr, ADF_DH895XCC_SMIAPF0_MASK_OFFSET, accel_dev->pf.vf_info ? 
0 : - GENMASK_ULL(GET_MAX_BANKS(accel_dev) - 1, 0)); + BIT_ULL(GET_MAX_BANKS(accel_dev)) - 1); ADF_CSR_WR(addr, ADF_DH895XCC_SMIAPF1_MASK_OFFSET, ADF_DH895XCC_SMIA1_MASK); } From patchwork Mon Oct 12 20:38:38 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Giovanni Cabiddu X-Patchwork-Id: 269558 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3BC18C433E7 for ; Mon, 12 Oct 2020 20:39:49 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 04A94208D5 for ; Mon, 12 Oct 2020 20:39:48 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2387843AbgJLUjs (ORCPT ); Mon, 12 Oct 2020 16:39:48 -0400 Received: from mga09.intel.com ([134.134.136.24]:34076 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2387766AbgJLUjo (ORCPT ); Mon, 12 Oct 2020 16:39:44 -0400 IronPort-SDR: gxfXxc2G3aO+5F1kJH+UFyLAV2oVG5CRpOovA7UgaEop17DzCnshNhS7f93fRWyT+FONgIkvmU gNnGH+7t0+QQ== X-IronPort-AV: E=McAfee;i="6000,8403,9772"; a="165913166" X-IronPort-AV: E=Sophos;i="5.77,367,1596524400"; d="scan'208";a="165913166" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Oct 2020 13:39:43 -0700 IronPort-SDR: 6fo66sZ+ViTz8OSPqzOKfS1HwGR18AjdPmDoIUYsryzl/33ope3FIQboI/wjQVOnrkEkhuaQNu /6ilLXsp/Z8A== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.77,367,1596524400"; d="scan'208";a="299328241" Received: from silpixa00400314.ir.intel.com (HELO silpixa00400314.ger.corp.intel.com) ([10.237.222.51]) by fmsmga007.fm.intel.com with ESMTP; 12 Oct 2020 13:39:41 -0700 From: Giovanni Cabiddu To: herbert@gondor.apana.org.au Cc: linux-crypto@vger.kernel.org, qat-linux@intel.com, Giovanni Cabiddu , Wojciech Ziemba , Maksim Lukoshkov , Andy Shevchenko Subject: [PATCH 22/31] crypto: qat - abstract writes to arbiter enable Date: Mon, 12 Oct 2020 21:38:38 +0100 Message-Id: <20201012203847.340030-23-giovanni.cabiddu@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201012203847.340030-1-giovanni.cabiddu@intel.com> References: <20201012203847.340030-1-giovanni.cabiddu@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Abstract writes to the service arbiter enable register. This is in preparation for the introduction of the qat_4xxx driver since the arbitration enable register differes between QAT GEN2 and QAT GEN4 devices. 
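For illustration, a standalone sketch of the callback this patch introduces for the arbiter enable register. The offsets mirror the GEN2 constants moved into adf_gen2_hw_data.h (0x19C base, one 0x1000 slot per bank); the MMIO write itself is stubbed with a printf, so this is not the driver code.

#include <stdint.h>
#include <stdio.h>

#define ARB_RINGSRVARBEN_OFFSET 0x19C
#define ARB_REG_SLOT            0x1000

struct hw_csr_ops {
        void (*write_csr_ring_srv_arb_en)(void *csr_base, uint32_t bank,
                                          uint32_t value);
};

static void gen2_write_ring_srv_arb_en(void *csr_base, uint32_t bank,
                                       uint32_t value)
{
        uint32_t offset = ARB_RINGSRVARBEN_OFFSET + ARB_REG_SLOT * bank;

        /* Stand-in for the real CSR write: just report what would be written. */
        printf("write 0x%x to csr + 0x%x (bank %u)\n", value, offset, bank);
        (void)csr_base;
}

int main(void)
{
        struct hw_csr_ops ops = {
                .write_csr_ring_srv_arb_en = gen2_write_ring_srv_arb_en,
        };

        /* e.g. bank 2 lands at 0x19C + 2 * 0x1000 = 0x219C */
        ops.write_csr_ring_srv_arb_en(NULL, 2, 0xFF);
        return 0;
}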
Signed-off-by: Giovanni Cabiddu Reviewed-by: Wojciech Ziemba Reviewed-by: Maksim Lukoshkov Reviewed-by: Andy Shevchenko --- drivers/crypto/qat/qat_common/adf_accel_devices.h | 2 ++ drivers/crypto/qat/qat_common/adf_gen2_hw_data.c | 7 +++++++ drivers/crypto/qat/qat_common/adf_gen2_hw_data.h | 6 ++++++ drivers/crypto/qat/qat_common/adf_hw_arbiter.c | 15 +++++---------- 4 files changed, 20 insertions(+), 10 deletions(-) diff --git a/drivers/crypto/qat/qat_common/adf_accel_devices.h b/drivers/crypto/qat/qat_common/adf_accel_devices.h index 1fd32c56b119..a3b63dfe4d7b 100644 --- a/drivers/crypto/qat/qat_common/adf_accel_devices.h +++ b/drivers/crypto/qat/qat_common/adf_accel_devices.h @@ -133,6 +133,8 @@ struct adf_hw_csr_ops { u32 value); void (*write_csr_int_flag_and_col)(void __iomem *csr_base_addr, u32 bank, u32 value); + void (*write_csr_ring_srv_arb_en)(void __iomem *csr_base_addr, u32 bank, + u32 value); }; struct adf_cfg_device_data; diff --git a/drivers/crypto/qat/qat_common/adf_gen2_hw_data.c b/drivers/crypto/qat/qat_common/adf_gen2_hw_data.c index 5de359165ab4..1aa17303838d 100644 --- a/drivers/crypto/qat/qat_common/adf_gen2_hw_data.c +++ b/drivers/crypto/qat/qat_common/adf_gen2_hw_data.c @@ -127,6 +127,12 @@ static void write_csr_int_flag_and_col(void __iomem *csr_base_addr, u32 bank, WRITE_CSR_INT_FLAG_AND_COL(csr_base_addr, bank, value); } +static void write_csr_ring_srv_arb_en(void __iomem *csr_base_addr, u32 bank, + u32 value) +{ + WRITE_CSR_RING_SRV_ARB_EN(csr_base_addr, bank, value); +} + void adf_gen2_init_hw_csr_ops(struct adf_hw_csr_ops *csr_ops) { csr_ops->build_csr_ring_base_addr = build_csr_ring_base_addr; @@ -142,6 +148,7 @@ void adf_gen2_init_hw_csr_ops(struct adf_hw_csr_ops *csr_ops) csr_ops->write_csr_int_col_en = write_csr_int_col_en; csr_ops->write_csr_int_col_ctl = write_csr_int_col_ctl; csr_ops->write_csr_int_flag_and_col = write_csr_int_flag_and_col; + csr_ops->write_csr_ring_srv_arb_en = write_csr_ring_srv_arb_en; } EXPORT_SYMBOL_GPL(adf_gen2_init_hw_csr_ops); diff --git a/drivers/crypto/qat/qat_common/adf_gen2_hw_data.h b/drivers/crypto/qat/qat_common/adf_gen2_hw_data.h index 04236a442f3c..3816e6500352 100644 --- a/drivers/crypto/qat/qat_common/adf_gen2_hw_data.h +++ b/drivers/crypto/qat/qat_common/adf_gen2_hw_data.h @@ -103,6 +103,12 @@ do { \ #define ADF_ARB_OFFSET 0x30000 #define ADF_ARB_WRK_2_SER_MAP_OFFSET 0x180 #define ADF_ARB_CONFIG (BIT(31) | BIT(6) | BIT(0)) +#define ADF_ARB_REG_SLOT 0x1000 +#define ADF_ARB_RINGSRVARBEN_OFFSET 0x19C + +#define WRITE_CSR_RING_SRV_ARB_EN(csr_addr, index, value) \ + ADF_CSR_WR(csr_addr, ADF_ARB_RINGSRVARBEN_OFFSET + \ + (ADF_ARB_REG_SLOT * (index)), value) /* Power gating */ #define ADF_POWERGATE_PKE BIT(24) diff --git a/drivers/crypto/qat/qat_common/adf_hw_arbiter.c b/drivers/crypto/qat/qat_common/adf_hw_arbiter.c index bd03c8f54eb4..9f5240d9488b 100644 --- a/drivers/crypto/qat/qat_common/adf_hw_arbiter.c +++ b/drivers/crypto/qat/qat_common/adf_hw_arbiter.c @@ -6,12 +6,6 @@ #define ADF_ARB_NUM 4 #define ADF_ARB_REG_SIZE 0x4 -#define ADF_ARB_REG_SLOT 0x1000 -#define ADF_ARB_RINGSRVARBEN_OFFSET 0x19C - -#define WRITE_CSR_ARB_RINGSRVARBEN(csr_addr, index, value) \ - ADF_CSR_WR(csr_addr, ADF_ARB_RINGSRVARBEN_OFFSET + \ - (ADF_ARB_REG_SLOT * index), value) #define WRITE_CSR_ARB_SARCONFIG(csr_addr, arb_offset, index, value) \ ADF_CSR_WR(csr_addr, (arb_offset) + \ @@ -57,6 +51,7 @@ void adf_update_ring_arb(struct adf_etr_ring_data *ring) { struct adf_accel_dev *accel_dev = ring->bank->accel_dev; struct adf_hw_device_data 
*hw_data = accel_dev->hw_device; + struct adf_hw_csr_ops *csr_ops = GET_CSR_OPS(accel_dev); u32 tx_ring_mask = hw_data->tx_rings_mask; u32 shift = hw_data->tx_rx_gap; u32 arben, arben_tx, arben_rx; @@ -73,14 +68,14 @@ void adf_update_ring_arb(struct adf_etr_ring_data *ring) arben_rx = (ring->bank->ring_mask & rx_ring_mask) >> shift; arben = arben_tx & arben_rx; - WRITE_CSR_ARB_RINGSRVARBEN(ring->bank->csr_addr, - ring->bank->bank_number, - arben); + csr_ops->write_csr_ring_srv_arb_en(ring->bank->csr_addr, + ring->bank->bank_number, arben); } void adf_exit_arb(struct adf_accel_dev *accel_dev) { struct adf_hw_device_data *hw_data = accel_dev->hw_device; + struct adf_hw_csr_ops *csr_ops = GET_CSR_OPS(accel_dev); u32 arb_off, wt_off; struct arb_info info; void __iomem *csr; @@ -107,6 +102,6 @@ void adf_exit_arb(struct adf_accel_dev *accel_dev) /* Disable arbitration on all rings */ for (i = 0; i < GET_MAX_BANKS(accel_dev); i++) - WRITE_CSR_ARB_RINGSRVARBEN(csr, i, 0); + csr_ops->write_csr_ring_srv_arb_en(csr, i, 0); } EXPORT_SYMBOL_GPL(adf_exit_arb); From patchwork Mon Oct 12 20:38:39 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Giovanni Cabiddu X-Patchwork-Id: 285438 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 19722C43467 for ; Mon, 12 Oct 2020 20:39:50 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id DF21920BED for ; Mon, 12 Oct 2020 20:39:49 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2387764AbgJLUjs (ORCPT ); Mon, 12 Oct 2020 16:39:48 -0400 Received: from mga09.intel.com ([134.134.136.24]:34081 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2387830AbgJLUjq (ORCPT ); Mon, 12 Oct 2020 16:39:46 -0400 IronPort-SDR: GCuXdty2PwHmOECC5byjjSnaSW6tI+QCfCcO/CTrTPa7dGRntBtkvnzNN9py7t7RvQrCHmEpWT PU5Jz+/P5zOQ== X-IronPort-AV: E=McAfee;i="6000,8403,9772"; a="165913172" X-IronPort-AV: E=Sophos;i="5.77,367,1596524400"; d="scan'208";a="165913172" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Oct 2020 13:39:45 -0700 IronPort-SDR: Owa7jcMrtMb6d5ed6JugewsNtLX+IfSHFPH6SURJIDr3JrJsiAxNNxn4rnaZ7b3PUJKHhASBTX Pz6DdWDdq6HQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.77,367,1596524400"; d="scan'208";a="299328249" Received: from silpixa00400314.ir.intel.com (HELO silpixa00400314.ger.corp.intel.com) ([10.237.222.51]) by fmsmga007.fm.intel.com with ESMTP; 12 Oct 2020 13:39:43 -0700 From: Giovanni Cabiddu To: herbert@gondor.apana.org.au Cc: linux-crypto@vger.kernel.org, qat-linux@intel.com, Giovanni Cabiddu , Wojciech Ziemba , Fiona Trahe , Andy Shevchenko Subject: [PATCH 23/31] crypto: qat - remove hardcoded bank irq clear flag mask Date: Mon, 12 Oct 2020 21:38:39 +0100 Message-Id: <20201012203847.340030-24-giovanni.cabiddu@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: 
<20201012203847.340030-1-giovanni.cabiddu@intel.com> References: <20201012203847.340030-1-giovanni.cabiddu@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Replace hardcoded value of the bank interrupt clear flag mask with a value calculated on the fly which is based on the number of rings present in a bank. This is to support devices that have a number of rings per bank different than 16. Signed-off-by: Giovanni Cabiddu Reviewed-by: Wojciech Ziemba Reviewed-by: Fiona Trahe Reviewed-by: Andy Shevchenko --- drivers/crypto/qat/qat_common/adf_transport.c | 4 ++-- drivers/crypto/qat/qat_common/adf_transport_access_macros.h | 1 - 2 files changed, 2 insertions(+), 3 deletions(-) diff --git a/drivers/crypto/qat/qat_common/adf_transport.c b/drivers/crypto/qat/qat_common/adf_transport.c index dd8f94fcb9a8..5a7030acdc33 100644 --- a/drivers/crypto/qat/qat_common/adf_transport.c +++ b/drivers/crypto/qat/qat_common/adf_transport.c @@ -374,6 +374,7 @@ static int adf_init_bank(struct adf_accel_dev *accel_dev, struct adf_hw_device_data *hw_data = accel_dev->hw_device; u8 num_rings_per_bank = hw_data->num_rings_per_bank; struct adf_hw_csr_ops *csr_ops = &hw_data->csr_ops; + u32 irq_mask = BIT(num_rings_per_bank) - 1; struct adf_etr_ring_data *ring; struct adf_etr_ring_data *tx_ring; u32 i, coalesc_enabled = 0; @@ -431,8 +432,7 @@ static int adf_init_bank(struct adf_accel_dev *accel_dev, goto err; } - csr_ops->write_csr_int_flag(csr_addr, bank_num, - ADF_BANK_INT_FLAG_CLEAR_MASK); + csr_ops->write_csr_int_flag(csr_addr, bank_num, irq_mask); csr_ops->write_csr_int_srcsel(csr_addr, bank_num); return 0; diff --git a/drivers/crypto/qat/qat_common/adf_transport_access_macros.h b/drivers/crypto/qat/qat_common/adf_transport_access_macros.h index 12b1605a740e..3b6b0267bbec 100644 --- a/drivers/crypto/qat/qat_common/adf_transport_access_macros.h +++ b/drivers/crypto/qat/qat_common/adf_transport_access_macros.h @@ -4,7 +4,6 @@ #define ADF_TRANSPORT_ACCESS_MACROS_H #include "adf_accel_devices.h" -#define ADF_BANK_INT_FLAG_CLEAR_MASK 0xFFFF #define ADF_RING_CONFIG_NEAR_FULL_WM 0x0A #define ADF_RING_CONFIG_NEAR_EMPTY_WM 0x05 #define ADF_COALESCING_MIN_TIME 0x1FF From patchwork Mon Oct 12 20:38:40 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Giovanni Cabiddu X-Patchwork-Id: 269557 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 652BCC433DF for ; Mon, 12 Oct 2020 20:39:50 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 309E220FC3 for ; Mon, 12 Oct 2020 20:39:50 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2387830AbgJLUjt (ORCPT ); Mon, 12 Oct 2020 16:39:49 -0400 Received: from mga09.intel.com ([134.134.136.24]:34024 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2387832AbgJLUjs (ORCPT ); Mon, 12 Oct 2020 16:39:48 -0400 IronPort-SDR: ZoNgFawHvz1k5nUXQDj3d7lXTICwG18PUJ0niECKfHZoY5cg5Z5PfeMCDXlH5K5zL8wph9SZgF 3CdIaXwXOpyQ== 
X-IronPort-AV: E=McAfee;i="6000,8403,9772"; a="165913177" X-IronPort-AV: E=Sophos;i="5.77,367,1596524400"; d="scan'208";a="165913177" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Oct 2020 13:39:47 -0700 IronPort-SDR: Gqc2kek6egE+CeJwe2giYf9sWg+J27VKniKjYqjba9lvjQsKMmtBZH4jzpBvA7qW2UKdBkSz8E ul8lyFnK6pJQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.77,367,1596524400"; d="scan'208";a="299328256" Received: from silpixa00400314.ir.intel.com (HELO silpixa00400314.ger.corp.intel.com) ([10.237.222.51]) by fmsmga007.fm.intel.com with ESMTP; 12 Oct 2020 13:39:45 -0700 From: Giovanni Cabiddu To: herbert@gondor.apana.org.au Cc: linux-crypto@vger.kernel.org, qat-linux@intel.com, Giovanni Cabiddu , Wojciech Ziemba , Maksim Lukoshkov , Fiona Trahe , Andy Shevchenko Subject: [PATCH 24/31] crypto: qat - call functions in adf_sriov if available Date: Mon, 12 Oct 2020 21:38:40 +0100 Message-Id: <20201012203847.340030-25-giovanni.cabiddu@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201012203847.340030-1-giovanni.cabiddu@intel.com> References: <20201012203847.340030-1-giovanni.cabiddu@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Call the function configure_iov_threads(), adf_enable_vf2pf_interrupts() and adf_pf2vf_notify_restarting() only if present in the struct adf_hw_device_data of the device. This is to allow for QAT drivers that do not implement those functions. Signed-off-by: Giovanni Cabiddu Reviewed-by: Wojciech Ziemba Reviewed-by: Maksim Lukoshkov Reviewed-by: Fiona Trahe Reviewed-by: Andy Shevchenko --- drivers/crypto/qat/qat_common/adf_sriov.c | 15 ++++++++++----- 1 file changed, 10 insertions(+), 5 deletions(-) diff --git a/drivers/crypto/qat/qat_common/adf_sriov.c b/drivers/crypto/qat/qat_common/adf_sriov.c index 9a0f6db83106..d887640355d4 100644 --- a/drivers/crypto/qat/qat_common/adf_sriov.c +++ b/drivers/crypto/qat/qat_common/adf_sriov.c @@ -62,10 +62,12 @@ static int adf_enable_sriov(struct adf_accel_dev *accel_dev) } /* Set Valid bits in AE Thread to PCIe Function Mapping */ - hw_data->configure_iov_threads(accel_dev, true); + if (hw_data->configure_iov_threads) + hw_data->configure_iov_threads(accel_dev, true); /* Enable VF to PF interrupts for all VFs */ - adf_enable_vf2pf_interrupts(accel_dev, BIT_ULL(totalvfs) - 1); + if (hw_data->get_pf2vf_offset) + adf_enable_vf2pf_interrupts(accel_dev, BIT_ULL(totalvfs) - 1); /* * Due to the hardware design, when SR-IOV and the ring arbiter @@ -94,15 +96,18 @@ void adf_disable_sriov(struct adf_accel_dev *accel_dev) if (!accel_dev->pf.vf_info) return; - adf_pf2vf_notify_restarting(accel_dev); + if (hw_data->get_pf2vf_offset) + adf_pf2vf_notify_restarting(accel_dev); pci_disable_sriov(accel_to_pci_dev(accel_dev)); /* Disable VF to PF interrupts */ - adf_disable_vf2pf_interrupts(accel_dev, GENMASK(31, 0)); + if (hw_data->get_pf2vf_offset) + adf_disable_vf2pf_interrupts(accel_dev, GENMASK(31, 0)); /* Clear Valid bits in AE Thread to PCIe Function Mapping */ - hw_data->configure_iov_threads(accel_dev, false); + if (hw_data->configure_iov_threads) + hw_data->configure_iov_threads(accel_dev, false); for (i = 0, vf = accel_dev->pf.vf_info; i < totalvfs; i++, vf++) { tasklet_disable(&vf->vf2pf_bh_tasklet); From patchwork Mon Oct 12 20:38:41 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit 
X-Patchwork-Submitter: Giovanni Cabiddu X-Patchwork-Id: 285437 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 75F56C43467 for ; Mon, 12 Oct 2020 20:39:55 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 3BF53214DB for ; Mon, 12 Oct 2020 20:39:55 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2387883AbgJLUjy (ORCPT ); Mon, 12 Oct 2020 16:39:54 -0400 Received: from mga09.intel.com ([134.134.136.24]:34024 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2387832AbgJLUju (ORCPT ); Mon, 12 Oct 2020 16:39:50 -0400 IronPort-SDR: QbQsRMlpqrZqS9DArUQov9jKiBuvYBDqmQz/QCKv3gunM8tgaCIqcH/fi0UTog76tLLXF/9pQa gzBeYi42tKMg== X-IronPort-AV: E=McAfee;i="6000,8403,9772"; a="165913178" X-IronPort-AV: E=Sophos;i="5.77,367,1596524400"; d="scan'208";a="165913178" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Oct 2020 13:39:49 -0700 IronPort-SDR: JyDonXfnAuNJPYX93U7EsdoOP7bSa1XEo8nCg9/rtHRLNGZJfhcixhzWQfLxGnLmR7UTkanPZM aJiUp7hgxcdQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.77,367,1596524400"; d="scan'208";a="299328261" Received: from silpixa00400314.ir.intel.com (HELO silpixa00400314.ger.corp.intel.com) ([10.237.222.51]) by fmsmga007.fm.intel.com with ESMTP; 12 Oct 2020 13:39:47 -0700 From: Giovanni Cabiddu To: herbert@gondor.apana.org.au Cc: linux-crypto@vger.kernel.org, qat-linux@intel.com, Giovanni Cabiddu , Andy Shevchenko Subject: [PATCH 25/31] crypto: qat - remove unnecessary void* casts Date: Mon, 12 Oct 2020 21:38:41 +0100 Message-Id: <20201012203847.340030-26-giovanni.cabiddu@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201012203847.340030-1-giovanni.cabiddu@intel.com> References: <20201012203847.340030-1-giovanni.cabiddu@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Remove superfluous casts to void* in function qat_crypto_dev_config(). 
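The casts are redundant because, in C, any object pointer converts implicitly to void *. A minimal standalone illustration (the key name and helper below are made up, not the driver's API):

#include <stdio.h>

static void add_param(const char *key, void *val)
{
        printf("%s -> %lu\n", key, *(unsigned long *)val);
}

int main(void)
{
        unsigned long val = 128;

        add_param("example_key", (void *)&val); /* explicit cast adds nothing */
        add_param("example_key", &val);         /* implicit conversion, same call */
        return 0;
}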
Suggested-by: Andy Shevchenko Signed-off-by: Giovanni Cabiddu Reviewed-by: Andy Shevchenko --- drivers/crypto/qat/qat_common/qat_crypto.c | 20 ++++++++++---------- 1 file changed, 10 insertions(+), 10 deletions(-) diff --git a/drivers/crypto/qat/qat_common/qat_crypto.c b/drivers/crypto/qat/qat_common/qat_crypto.c index 089d5d7b738e..ab1716f7044d 100644 --- a/drivers/crypto/qat/qat_common/qat_crypto.c +++ b/drivers/crypto/qat/qat_common/qat_crypto.c @@ -135,61 +135,61 @@ int qat_crypto_dev_config(struct adf_accel_dev *accel_dev) val = i; snprintf(key, sizeof(key), ADF_CY "%d" ADF_RING_BANK_NUM, i); if (adf_cfg_add_key_value_param(accel_dev, ADF_KERNEL_SEC, - key, (void *)&val, ADF_DEC)) + key, &val, ADF_DEC)) goto err; snprintf(key, sizeof(key), ADF_CY "%d" ADF_ETRMGR_CORE_AFFINITY, i); if (adf_cfg_add_key_value_param(accel_dev, ADF_KERNEL_SEC, - key, (void *)&val, ADF_DEC)) + key, &val, ADF_DEC)) goto err; snprintf(key, sizeof(key), ADF_CY "%d" ADF_RING_ASYM_SIZE, i); val = 128; if (adf_cfg_add_key_value_param(accel_dev, ADF_KERNEL_SEC, - key, (void *)&val, ADF_DEC)) + key, &val, ADF_DEC)) goto err; val = 512; snprintf(key, sizeof(key), ADF_CY "%d" ADF_RING_SYM_SIZE, i); if (adf_cfg_add_key_value_param(accel_dev, ADF_KERNEL_SEC, - key, (void *)&val, ADF_DEC)) + key, &val, ADF_DEC)) goto err; val = 0; snprintf(key, sizeof(key), ADF_CY "%d" ADF_RING_ASYM_TX, i); if (adf_cfg_add_key_value_param(accel_dev, ADF_KERNEL_SEC, - key, (void *)&val, ADF_DEC)) + key, &val, ADF_DEC)) goto err; val = 2; snprintf(key, sizeof(key), ADF_CY "%d" ADF_RING_SYM_TX, i); if (adf_cfg_add_key_value_param(accel_dev, ADF_KERNEL_SEC, - key, (void *)&val, ADF_DEC)) + key, &val, ADF_DEC)) goto err; val = 8; snprintf(key, sizeof(key), ADF_CY "%d" ADF_RING_ASYM_RX, i); if (adf_cfg_add_key_value_param(accel_dev, ADF_KERNEL_SEC, - key, (void *)&val, ADF_DEC)) + key, &val, ADF_DEC)) goto err; val = 10; snprintf(key, sizeof(key), ADF_CY "%d" ADF_RING_SYM_RX, i); if (adf_cfg_add_key_value_param(accel_dev, ADF_KERNEL_SEC, - key, (void *)&val, ADF_DEC)) + key, &val, ADF_DEC)) goto err; val = ADF_COALESCING_DEF_TIME; snprintf(key, sizeof(key), ADF_ETRMGR_COALESCE_TIMER_FORMAT, i); if (adf_cfg_add_key_value_param(accel_dev, "Accelerator0", - key, (void *)&val, ADF_DEC)) + key, &val, ADF_DEC)) goto err; } val = i; if (adf_cfg_add_key_value_param(accel_dev, ADF_KERNEL_SEC, - ADF_NUM_CY, (void *)&val, ADF_DEC)) + ADF_NUM_CY, &val, ADF_DEC)) goto err; set_bit(ADF_STATUS_CONFIGURED, &accel_dev->status); From patchwork Mon Oct 12 20:38:42 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Giovanni Cabiddu X-Patchwork-Id: 269556 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4CBA7C43457 for ; Mon, 12 Oct 2020 20:39:55 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 18CFD20FC3 for ; Mon, 12 Oct 2020 20:39:55 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2387922AbgJLUjy (ORCPT ); Mon, 12 Oct 2020 16:39:54 -0400 Received: from mga09.intel.com 
([134.134.136.24]:34108 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2387883AbgJLUjv (ORCPT ); Mon, 12 Oct 2020 16:39:51 -0400 IronPort-SDR: wEbq0Uvyg9ZPS0QZNQPO7N6Aoac+1TsCtx+73Cb+tg/jYK6wM6FiOpPVlBRGmcrQ9WIuGmf+JR /OFAU2+a/Vmg== X-IronPort-AV: E=McAfee;i="6000,8403,9772"; a="165913181" X-IronPort-AV: E=Sophos;i="5.77,367,1596524400"; d="scan'208";a="165913181" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Oct 2020 13:39:50 -0700 IronPort-SDR: jpt2ntFxNyvHqKF0nWyf/+k/WKxi3JIw+xpppIZ4xy24osKhAUQ+jtuzMqyinxJ4c3IogjkfuT 9dsQlqlTAcuA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.77,367,1596524400"; d="scan'208";a="299328266" Received: from silpixa00400314.ir.intel.com (HELO silpixa00400314.ger.corp.intel.com) ([10.237.222.51]) by fmsmga007.fm.intel.com with ESMTP; 12 Oct 2020 13:39:49 -0700 From: Giovanni Cabiddu To: herbert@gondor.apana.org.au Cc: linux-crypto@vger.kernel.org, qat-linux@intel.com, Giovanni Cabiddu , Andy Shevchenko Subject: [PATCH 26/31] crypto: qat - change return value in adf_cfg_add_key_value_param() Date: Mon, 12 Oct 2020 21:38:42 +0100 Message-Id: <20201012203847.340030-27-giovanni.cabiddu@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201012203847.340030-1-giovanni.cabiddu@intel.com> References: <20201012203847.340030-1-giovanni.cabiddu@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org If the parameter type provided to adf_cfg_add_key_value_param() is invalid, return -EINVAL instead of -1 that is treated as -EPERM and may confuse. Suggested-by: Andy Shevchenko Signed-off-by: Giovanni Cabiddu Reviewed-by: Andy Shevchenko --- drivers/crypto/qat/qat_common/adf_cfg.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/crypto/qat/qat_common/adf_cfg.c b/drivers/crypto/qat/qat_common/adf_cfg.c index 22ae32838113..f2a29c70d61a 100644 --- a/drivers/crypto/qat/qat_common/adf_cfg.c +++ b/drivers/crypto/qat/qat_common/adf_cfg.c @@ -243,7 +243,7 @@ int adf_cfg_add_key_value_param(struct adf_accel_dev *accel_dev, } else { dev_err(&GET_DEV(accel_dev), "Unknown type given.\n"); kfree(key_val); - return -1; + return -EINVAL; } key_val->type = type; down_write(&cfg->lock); From patchwork Mon Oct 12 20:38:43 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Giovanni Cabiddu X-Patchwork-Id: 269555 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 08540C4363A for ; Mon, 12 Oct 2020 20:39:56 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id C0EAD20FC3 for ; Mon, 12 Oct 2020 20:39:55 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2387936AbgJLUjy (ORCPT ); Mon, 12 Oct 2020 16:39:54 -0400 Received: from mga09.intel.com ([134.134.136.24]:34122 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP 
id S2387887AbgJLUjw (ORCPT ); Mon, 12 Oct 2020 16:39:52 -0400 IronPort-SDR: hGBxlh11juclquOQ3QUOR/o+HjF4dZZ0Z6XdWqFTK65xXdpk33HcS3wH7lDw3WDbJOf9vZkDcq pzVu1fEo3NrQ== X-IronPort-AV: E=McAfee;i="6000,8403,9772"; a="165913184" X-IronPort-AV: E=Sophos;i="5.77,367,1596524400"; d="scan'208";a="165913184" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Oct 2020 13:39:51 -0700 IronPort-SDR: A0k7SAbh8+GeN4FdgbWs9urU+nI2JpZvQ5HE789tsfeKYG/rLCdGjuNjs9vKverCwFK7n8Lavh +DLCoz6eQHVQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.77,367,1596524400"; d="scan'208";a="299328272" Received: from silpixa00400314.ir.intel.com (HELO silpixa00400314.ger.corp.intel.com) ([10.237.222.51]) by fmsmga007.fm.intel.com with ESMTP; 12 Oct 2020 13:39:50 -0700 From: Giovanni Cabiddu To: herbert@gondor.apana.org.au Cc: linux-crypto@vger.kernel.org, qat-linux@intel.com, Giovanni Cabiddu , Andy Shevchenko Subject: [PATCH 27/31] crypto: qat - change return value in adf_cfg_key_val_get() Date: Mon, 12 Oct 2020 21:38:43 +0100 Message-Id: <20201012203847.340030-28-giovanni.cabiddu@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201012203847.340030-1-giovanni.cabiddu@intel.com> References: <20201012203847.340030-1-giovanni.cabiddu@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org If a key is not found in the internal key value storage, return -ENODATA instead of -1 that is treated as -EPERM and may confuse. Suggested-by: Andy Shevchenko Signed-off-by: Giovanni Cabiddu Reviewed-by: Andy Shevchenko --- drivers/crypto/qat/qat_common/adf_cfg.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/crypto/qat/qat_common/adf_cfg.c b/drivers/crypto/qat/qat_common/adf_cfg.c index f2a29c70d61a..575b6f002303 100644 --- a/drivers/crypto/qat/qat_common/adf_cfg.c +++ b/drivers/crypto/qat/qat_common/adf_cfg.c @@ -196,7 +196,7 @@ static int adf_cfg_key_val_get(struct adf_accel_dev *accel_dev, memcpy(val, keyval->val, ADF_CFG_MAX_VAL_LEN_IN_BYTES); return 0; } - return -1; + return -ENODATA; } /** From patchwork Mon Oct 12 20:38:44 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Giovanni Cabiddu X-Patchwork-Id: 285436 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 306ECC433E7 for ; Mon, 12 Oct 2020 20:39:56 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id E68CE214DB for ; Mon, 12 Oct 2020 20:39:55 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2387832AbgJLUjy (ORCPT ); Mon, 12 Oct 2020 16:39:54 -0400 Received: from mga09.intel.com ([134.134.136.24]:34024 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2387909AbgJLUjy (ORCPT ); Mon, 12 Oct 2020 16:39:54 -0400 IronPort-SDR: GA+iNWqk+HwHLTBYkbjXlsC8EvlPxYsWZ60QDvy+WCCd5DODlKJX6eR2wskQVIW/ulrTm59Yl7 vVV8DfDO0fXg== X-IronPort-AV: 
E=McAfee;i="6000,8403,9772"; a="165913188" X-IronPort-AV: E=Sophos;i="5.77,367,1596524400"; d="scan'208";a="165913188" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Oct 2020 13:39:53 -0700 IronPort-SDR: bfgqdKCu8tL213Osbko6Zih9XSYanSLTZTtkhuooksY65/EYTFKMWsamMA4U34B2DTzMAwdZhc F1PgcLtsAELg== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.77,367,1596524400"; d="scan'208";a="299328279" Received: from silpixa00400314.ir.intel.com (HELO silpixa00400314.ger.corp.intel.com) ([10.237.222.51]) by fmsmga007.fm.intel.com with ESMTP; 12 Oct 2020 13:39:51 -0700 From: Giovanni Cabiddu To: herbert@gondor.apana.org.au Cc: linux-crypto@vger.kernel.org, qat-linux@intel.com, Giovanni Cabiddu , Andy Shevchenko Subject: [PATCH 28/31] crypto: qat - refactor qat_crypto_create_instances() Date: Mon, 12 Oct 2020 21:38:44 +0100 Message-Id: <20201012203847.340030-29-giovanni.cabiddu@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201012203847.340030-1-giovanni.cabiddu@intel.com> References: <20201012203847.340030-1-giovanni.cabiddu@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Refactor function qat_crypto_create_instances() to propagate errors to the caller. Suggested-by: Andy Shevchenko Signed-off-by: Giovanni Cabiddu Reviewed-by: Andy Shevchenko --- drivers/crypto/qat/qat_common/qat_crypto.c | 68 +++++++++++++--------- 1 file changed, 41 insertions(+), 27 deletions(-) diff --git a/drivers/crypto/qat/qat_common/qat_crypto.c b/drivers/crypto/qat/qat_common/qat_crypto.c index ab1716f7044d..735463042987 100644 --- a/drivers/crypto/qat/qat_common/qat_crypto.c +++ b/drivers/crypto/qat/qat_common/qat_crypto.c @@ -202,83 +202,97 @@ EXPORT_SYMBOL_GPL(qat_crypto_dev_config); static int qat_crypto_create_instances(struct adf_accel_dev *accel_dev) { - int i; - unsigned long bank; - unsigned long num_inst, num_msg_sym, num_msg_asym; - int msg_size; - struct qat_crypto_instance *inst; + unsigned long bank, num_inst, num_msg_sym, num_msg_asym; char key[ADF_CFG_MAX_KEY_LEN_IN_BYTES]; char val[ADF_CFG_MAX_VAL_LEN_IN_BYTES]; + struct qat_crypto_instance *inst; + int msg_size; + int ret; + int i; INIT_LIST_HEAD(&accel_dev->crypto_list); - if (adf_cfg_get_param_value(accel_dev, SEC, ADF_NUM_CY, val)) - return -EFAULT; + ret = adf_cfg_get_param_value(accel_dev, SEC, ADF_NUM_CY, val); + if (ret) + return ret; - if (kstrtoul(val, 0, &num_inst)) - return -EFAULT; + ret = kstrtoul(val, 0, &num_inst); + if (ret) + return ret; for (i = 0; i < num_inst; i++) { inst = kzalloc_node(sizeof(*inst), GFP_KERNEL, dev_to_node(&GET_DEV(accel_dev))); - if (!inst) + if (!inst) { + ret = -ENOMEM; goto err; + } list_add_tail(&inst->list, &accel_dev->crypto_list); inst->id = i; atomic_set(&inst->refctr, 0); inst->accel_dev = accel_dev; snprintf(key, sizeof(key), ADF_CY "%d" ADF_RING_BANK_NUM, i); - if (adf_cfg_get_param_value(accel_dev, SEC, key, val)) + ret = adf_cfg_get_param_value(accel_dev, SEC, key, val); + if (ret) goto err; - if (kstrtoul(val, 10, &bank)) + ret = kstrtoul(val, 10, &bank); + if (ret) goto err; snprintf(key, sizeof(key), ADF_CY "%d" ADF_RING_SYM_SIZE, i); - if (adf_cfg_get_param_value(accel_dev, SEC, key, val)) + ret = adf_cfg_get_param_value(accel_dev, SEC, key, val); + if (ret) goto err; - if (kstrtoul(val, 10, &num_msg_sym)) + ret = kstrtoul(val, 10, &num_msg_sym); + if (ret) goto err; num_msg_sym = num_msg_sym >> 
1; snprintf(key, sizeof(key), ADF_CY "%d" ADF_RING_ASYM_SIZE, i); - if (adf_cfg_get_param_value(accel_dev, SEC, key, val)) + ret = adf_cfg_get_param_value(accel_dev, SEC, key, val); + if (ret) goto err; - if (kstrtoul(val, 10, &num_msg_asym)) + ret = kstrtoul(val, 10, &num_msg_asym); + if (ret) goto err; num_msg_asym = num_msg_asym >> 1; msg_size = ICP_QAT_FW_REQ_DEFAULT_SZ; snprintf(key, sizeof(key), ADF_CY "%d" ADF_RING_SYM_TX, i); - if (adf_create_ring(accel_dev, SEC, bank, num_msg_sym, - msg_size, key, NULL, 0, &inst->sym_tx)) + ret = adf_create_ring(accel_dev, SEC, bank, num_msg_sym, + msg_size, key, NULL, 0, &inst->sym_tx); + if (ret) goto err; msg_size = msg_size >> 1; snprintf(key, sizeof(key), ADF_CY "%d" ADF_RING_ASYM_TX, i); - if (adf_create_ring(accel_dev, SEC, bank, num_msg_asym, - msg_size, key, NULL, 0, &inst->pke_tx)) + ret = adf_create_ring(accel_dev, SEC, bank, num_msg_asym, + msg_size, key, NULL, 0, &inst->pke_tx); + if (ret) goto err; msg_size = ICP_QAT_FW_RESP_DEFAULT_SZ; snprintf(key, sizeof(key), ADF_CY "%d" ADF_RING_SYM_RX, i); - if (adf_create_ring(accel_dev, SEC, bank, num_msg_sym, - msg_size, key, qat_alg_callback, 0, - &inst->sym_rx)) + ret = adf_create_ring(accel_dev, SEC, bank, num_msg_sym, + msg_size, key, qat_alg_callback, 0, + &inst->sym_rx); + if (ret) goto err; snprintf(key, sizeof(key), ADF_CY "%d" ADF_RING_ASYM_RX, i); - if (adf_create_ring(accel_dev, SEC, bank, num_msg_asym, - msg_size, key, qat_alg_asym_callback, 0, - &inst->pke_rx)) + ret = adf_create_ring(accel_dev, SEC, bank, num_msg_asym, + msg_size, key, qat_alg_asym_callback, 0, + &inst->pke_rx); + if (ret) goto err; } return 0; err: qat_crypto_free_instances(accel_dev); - return -ENOMEM; + return ret; } static int qat_crypto_init(struct adf_accel_dev *accel_dev) From patchwork Mon Oct 12 20:38:45 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Giovanni Cabiddu X-Patchwork-Id: 269554 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B20AEC433E7 for ; Mon, 12 Oct 2020 20:40:00 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 7F04B20BED for ; Mon, 12 Oct 2020 20:40:00 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2388020AbgJLUj7 (ORCPT ); Mon, 12 Oct 2020 16:39:59 -0400 Received: from mga09.intel.com ([134.134.136.24]:34024 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2387887AbgJLUjz (ORCPT ); Mon, 12 Oct 2020 16:39:55 -0400 IronPort-SDR: EfcMx+J/LuFjpGw6DwWEyGHd5p/CdYR/1dXW++QLCZcoFbcYMV5clMRNtQsqoI6I7kqV4FKgfQ yT3/noRhsYCA== X-IronPort-AV: E=McAfee;i="6000,8403,9772"; a="165913193" X-IronPort-AV: E=Sophos;i="5.77,367,1596524400"; d="scan'208";a="165913193" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Oct 2020 13:39:54 -0700 IronPort-SDR: 
From patchwork Mon Oct 12 20:38:45 2020
X-Patchwork-Submitter: Giovanni Cabiddu
X-Patchwork-Id: 269554
From: Giovanni Cabiddu
To: herbert@gondor.apana.org.au
Cc: linux-crypto@vger.kernel.org, qat-linux@intel.com,
 Giovanni Cabiddu, Andy Shevchenko
Subject: [PATCH 29/31] crypto: qat - refactor qat_crypto_dev_config()
Date: Mon, 12 Oct 2020 21:38:45 +0100
Message-Id: <20201012203847.340030-30-giovanni.cabiddu@intel.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201012203847.340030-1-giovanni.cabiddu@intel.com>
References: <20201012203847.340030-1-giovanni.cabiddu@intel.com>
X-Mailing-List: linux-crypto@vger.kernel.org

Refactor function qat_crypto_dev_config() to propagate errors to the
caller.

Suggested-by: Andy Shevchenko
Signed-off-by: Giovanni Cabiddu
Reviewed-by: Andy Shevchenko
---
 drivers/crypto/qat/qat_common/qat_crypto.c | 67 +++++++++++++---------
 1 file changed, 41 insertions(+), 26 deletions(-)

diff --git a/drivers/crypto/qat/qat_common/qat_crypto.c b/drivers/crypto/qat/qat_common/qat_crypto.c
index 735463042987..8d8ac5e48f47 100644
--- a/drivers/crypto/qat/qat_common/qat_crypto.c
+++ b/drivers/crypto/qat/qat_common/qat_crypto.c
@@ -115,88 +115,103 @@ struct qat_crypto_instance *qat_crypto_get_instance_node(int node)
  */
 int qat_crypto_dev_config(struct adf_accel_dev *accel_dev)
 {
-    int cpus = num_online_cpus();
-    int banks = GET_MAX_BANKS(accel_dev);
     char key[ADF_CFG_MAX_KEY_LEN_IN_BYTES];
-    int i;
+    int banks = GET_MAX_BANKS(accel_dev);
+    int cpus = num_online_cpus();
     unsigned long val;
     int instances;
+    int ret;
+    int i;
 
     if (adf_hw_dev_has_crypto(accel_dev))
         instances = min(cpus, banks);
     else
         instances = 0;
 
-    if (adf_cfg_section_add(accel_dev, ADF_KERNEL_SEC))
+    ret = adf_cfg_section_add(accel_dev, ADF_KERNEL_SEC);
+    if (ret)
         goto err;
-    if (adf_cfg_section_add(accel_dev, "Accelerator0"))
+
+    ret = adf_cfg_section_add(accel_dev, "Accelerator0");
+    if (ret)
         goto err;
+
     for (i = 0; i < instances; i++) {
         val = i;
         snprintf(key, sizeof(key), ADF_CY "%d" ADF_RING_BANK_NUM, i);
-        if (adf_cfg_add_key_value_param(accel_dev, ADF_KERNEL_SEC,
-                                        key, &val, ADF_DEC))
+        ret = adf_cfg_add_key_value_param(accel_dev, ADF_KERNEL_SEC,
+                                          key, &val, ADF_DEC);
+        if (ret)
             goto err;
 
         snprintf(key, sizeof(key), ADF_CY "%d" ADF_ETRMGR_CORE_AFFINITY, i);
-        if (adf_cfg_add_key_value_param(accel_dev, ADF_KERNEL_SEC,
-                                        key, &val, ADF_DEC))
+        ret = adf_cfg_add_key_value_param(accel_dev, ADF_KERNEL_SEC,
                                          key, &val, ADF_DEC);
+        if (ret)
             goto err;
 
         snprintf(key, sizeof(key), ADF_CY "%d" ADF_RING_ASYM_SIZE, i);
         val = 128;
-        if (adf_cfg_add_key_value_param(accel_dev, ADF_KERNEL_SEC,
-                                        key, &val, ADF_DEC))
+        ret = adf_cfg_add_key_value_param(accel_dev, ADF_KERNEL_SEC,
+                                          key, &val, ADF_DEC);
+        if (ret)
             goto err;
 
         val = 512;
         snprintf(key, sizeof(key), ADF_CY "%d" ADF_RING_SYM_SIZE, i);
-        if (adf_cfg_add_key_value_param(accel_dev, ADF_KERNEL_SEC,
-                                        key, &val, ADF_DEC))
+        ret = adf_cfg_add_key_value_param(accel_dev, ADF_KERNEL_SEC,
+                                          key, &val, ADF_DEC);
+        if (ret)
             goto err;
 
         val = 0;
         snprintf(key, sizeof(key), ADF_CY "%d" ADF_RING_ASYM_TX, i);
-        if (adf_cfg_add_key_value_param(accel_dev, ADF_KERNEL_SEC,
-                                        key, &val, ADF_DEC))
+        ret = adf_cfg_add_key_value_param(accel_dev, ADF_KERNEL_SEC,
+                                          key, &val, ADF_DEC);
+        if (ret)
             goto err;
 
         val = 2;
         snprintf(key, sizeof(key), ADF_CY "%d" ADF_RING_SYM_TX, i);
-        if (adf_cfg_add_key_value_param(accel_dev, ADF_KERNEL_SEC,
-                                        key, &val, ADF_DEC))
+        ret = adf_cfg_add_key_value_param(accel_dev, ADF_KERNEL_SEC,
+                                          key, &val, ADF_DEC);
+        if (ret)
             goto err;
 
         val = 8;
         snprintf(key, sizeof(key), ADF_CY "%d" ADF_RING_ASYM_RX, i);
-        if (adf_cfg_add_key_value_param(accel_dev, ADF_KERNEL_SEC,
-                                        key, &val, ADF_DEC))
+        ret = adf_cfg_add_key_value_param(accel_dev, ADF_KERNEL_SEC,
+                                          key, &val, ADF_DEC);
+        if (ret)
             goto err;
 
         val = 10;
         snprintf(key, sizeof(key), ADF_CY "%d" ADF_RING_SYM_RX, i);
-        if (adf_cfg_add_key_value_param(accel_dev, ADF_KERNEL_SEC,
-                                        key, &val, ADF_DEC))
+        ret = adf_cfg_add_key_value_param(accel_dev, ADF_KERNEL_SEC,
+                                          key, &val, ADF_DEC);
+        if (ret)
             goto err;
 
         val = ADF_COALESCING_DEF_TIME;
         snprintf(key, sizeof(key), ADF_ETRMGR_COALESCE_TIMER_FORMAT, i);
-        if (adf_cfg_add_key_value_param(accel_dev, "Accelerator0",
-                                        key, &val, ADF_DEC))
+        ret = adf_cfg_add_key_value_param(accel_dev, "Accelerator0",
+                                          key, &val, ADF_DEC);
+        if (ret)
             goto err;
     }
 
     val = i;
-    if (adf_cfg_add_key_value_param(accel_dev, ADF_KERNEL_SEC,
-                                    ADF_NUM_CY, &val, ADF_DEC))
+    ret = adf_cfg_add_key_value_param(accel_dev, ADF_KERNEL_SEC, ADF_NUM_CY,
+                                      &val, ADF_DEC);
+    if (ret)
         goto err;
 
     set_bit(ADF_STATUS_CONFIGURED, &accel_dev->status);
     return 0;
 err:
     dev_err(&GET_DEV(accel_dev), "Failed to start QAT accel dev\n");
-    return -EINVAL;
+    return ret;
 }
 EXPORT_SYMBOL_GPL(qat_crypto_dev_config);
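
For orientation, the loop above writes a fixed set of per-instance key/value pairs into the driver's internal configuration tables. The sketch below prints roughly what is written for instance 0, using the default values visible in the diff. It is illustration only: the key suffix strings are assumptions standing in for the literals defined in adf_cfg_strings.h, not authoritative values.

/*
 * Illustration of the per-instance defaults generated by
 * qat_crypto_dev_config() for instance 0.  Suffix strings are
 * placeholders for the ADF_* literals in adf_cfg_strings.h.
 */
#include <stdio.h>

struct cfg_entry {
    const char *suffix;     /* appended to "Cy<N>" */
    unsigned long val;
};

int main(void)
{
    const struct cfg_entry defaults[] = {
        { "BankNumber",   0 },   /* bank = instance index */
        { "CoreAffinity", 0 },   /* affinity = instance index */
        { "RingAsymSize", 128 },
        { "RingSymSize",  512 },
        { "RingAsymTx",   0 },
        { "RingSymTx",    2 },
        { "RingAsymRx",   8 },
        { "RingSymRx",    10 },
    };
    char key[64];
    int i = 0;      /* instance number */
    size_t j;

    for (j = 0; j < sizeof(defaults) / sizeof(defaults[0]); j++) {
        /* Mirrors snprintf(key, sizeof(key), ADF_CY "%d" <suffix>, i) */
        snprintf(key, sizeof(key), "Cy%d%s", i, defaults[j].suffix);
        printf("%s = %lu\n", key, defaults[j].val);
    }
    return 0;
}
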
From patchwork Mon Oct 12 20:38:46 2020
X-Patchwork-Submitter: Giovanni Cabiddu
X-Patchwork-Id: 269553
From: Giovanni Cabiddu
To: herbert@gondor.apana.org.au
Cc: linux-crypto@vger.kernel.org, qat-linux@intel.com, Giovanni Cabiddu,
 Wojciech Ziemba, Fiona Trahe, Andy Shevchenko
Subject: [PATCH 30/31] crypto: qat - allow for instances in different banks
Date: Mon, 12 Oct 2020 21:38:46 +0100
Message-Id: <20201012203847.340030-31-giovanni.cabiddu@intel.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201012203847.340030-1-giovanni.cabiddu@intel.com>
References: <20201012203847.340030-1-giovanni.cabiddu@intel.com>
X-Mailing-List: linux-crypto@vger.kernel.org

Allow crypto instances to be configured with symmetric crypto rings in a
bank different from the one holding the asymmetric crypto rings. This
supports devices whose banks consist of a single ring pair; on such
devices a crypto instance is composed of two separate banks.

The changed string literals are not exposed to user space.

Signed-off-by: Giovanni Cabiddu
Reviewed-by: Wojciech Ziemba
Reviewed-by: Fiona Trahe
Reviewed-by: Andy Shevchenko
---
 .../crypto/qat/qat_common/adf_cfg_strings.h |  3 +-
 drivers/crypto/qat/qat_common/qat_crypto.c  | 34 ++++++++++++++-----
 2 files changed, 28 insertions(+), 9 deletions(-)

diff --git a/drivers/crypto/qat/qat_common/adf_cfg_strings.h b/drivers/crypto/qat/qat_common/adf_cfg_strings.h
index 314790f5b0af..09651e1f937a 100644
--- a/drivers/crypto/qat/qat_common/adf_cfg_strings.h
+++ b/drivers/crypto/qat/qat_common/adf_cfg_strings.h
@@ -18,7 +18,8 @@
 #define ADF_RING_DC_TX "RingTx"
 #define ADF_RING_DC_RX "RingRx"
 #define ADF_ETRMGR_BANK "Bank"
-#define ADF_RING_BANK_NUM "BankNumber"
+#define ADF_RING_SYM_BANK_NUM "BankSymNumber"
+#define ADF_RING_ASYM_BANK_NUM "BankAsymNumber"
 #define ADF_CY "Cy"
 #define ADF_DC "Dc"
 #define ADF_ETRMGR_COALESCING_ENABLED "InterruptCoalescingEnabled"
diff --git a/drivers/crypto/qat/qat_common/qat_crypto.c b/drivers/crypto/qat/qat_common/qat_crypto.c
index 8d8ac5e48f47..ece6776fbd53 100644
--- a/drivers/crypto/qat/qat_common/qat_crypto.c
+++ b/drivers/crypto/qat/qat_common/qat_crypto.c
@@ -138,7 +138,13 @@ int qat_crypto_dev_config(struct adf_accel_dev *accel_dev)
 
     for (i = 0; i < instances; i++) {
         val = i;
-        snprintf(key, sizeof(key), ADF_CY "%d" ADF_RING_BANK_NUM, i);
+        snprintf(key, sizeof(key), ADF_CY "%d" ADF_RING_ASYM_BANK_NUM, i);
+        ret = adf_cfg_add_key_value_param(accel_dev, ADF_KERNEL_SEC,
+                                          key, &val, ADF_DEC);
+        if (ret)
+            goto err;
+
+        snprintf(key, sizeof(key), ADF_CY "%d" ADF_RING_SYM_BANK_NUM, i);
         ret = adf_cfg_add_key_value_param(accel_dev, ADF_KERNEL_SEC,
                                           key, &val, ADF_DEC);
         if (ret)
@@ -217,9 +223,10 @@ EXPORT_SYMBOL_GPL(qat_crypto_dev_config);
 
 static int qat_crypto_create_instances(struct adf_accel_dev *accel_dev)
 {
-    unsigned long bank, num_inst, num_msg_sym, num_msg_asym;
+    unsigned long num_inst, num_msg_sym, num_msg_asym;
     char key[ADF_CFG_MAX_KEY_LEN_IN_BYTES];
     char val[ADF_CFG_MAX_VAL_LEN_IN_BYTES];
+    unsigned long sym_bank, asym_bank;
     struct qat_crypto_instance *inst;
     int msg_size;
     int ret;
@@ -246,14 +253,25 @@ static int qat_crypto_create_instances(struct adf_accel_dev *accel_dev)
         inst->id = i;
         atomic_set(&inst->refctr, 0);
         inst->accel_dev = accel_dev;
-        snprintf(key, sizeof(key), ADF_CY "%d" ADF_RING_BANK_NUM, i);
+
+        snprintf(key, sizeof(key), ADF_CY "%d" ADF_RING_SYM_BANK_NUM, i);
+        ret = adf_cfg_get_param_value(accel_dev, SEC, key, val);
+        if (ret)
+            goto err;
+
+        ret = kstrtoul(val, 10, &sym_bank);
+        if (ret)
+            goto err;
+
+        snprintf(key, sizeof(key), ADF_CY "%d" ADF_RING_ASYM_BANK_NUM, i);
         ret = adf_cfg_get_param_value(accel_dev, SEC, key, val);
         if (ret)
             goto err;
 
-        ret = kstrtoul(val, 10, &bank);
+        ret = kstrtoul(val, 10, &asym_bank);
         if (ret)
             goto err;
+
         snprintf(key, sizeof(key), ADF_CY "%d" ADF_RING_SYM_SIZE, i);
         ret = adf_cfg_get_param_value(accel_dev, SEC, key, val);
         if (ret)
@@ -277,28 +295,28 @@ static int qat_crypto_create_instances(struct adf_accel_dev *accel_dev)
 
         msg_size = ICP_QAT_FW_REQ_DEFAULT_SZ;
         snprintf(key, sizeof(key), ADF_CY "%d" ADF_RING_SYM_TX, i);
-        ret = adf_create_ring(accel_dev, SEC, bank, num_msg_sym,
+        ret = adf_create_ring(accel_dev, SEC, sym_bank, num_msg_sym,
                               msg_size, key, NULL, 0, &inst->sym_tx);
         if (ret)
             goto err;
 
         msg_size = msg_size >> 1;
         snprintf(key, sizeof(key), ADF_CY "%d" ADF_RING_ASYM_TX, i);
-        ret = adf_create_ring(accel_dev, SEC, bank, num_msg_asym,
+        ret = adf_create_ring(accel_dev, SEC, asym_bank, num_msg_asym,
                               msg_size, key, NULL, 0, &inst->pke_tx);
         if (ret)
             goto err;
 
         msg_size = ICP_QAT_FW_RESP_DEFAULT_SZ;
         snprintf(key, sizeof(key), ADF_CY "%d" ADF_RING_SYM_RX, i);
-        ret = adf_create_ring(accel_dev, SEC, bank, num_msg_sym,
+        ret = adf_create_ring(accel_dev, SEC, sym_bank, num_msg_sym,
                               msg_size, key, qat_alg_callback, 0,
                               &inst->sym_rx);
         if (ret)
             goto err;
 
         snprintf(key, sizeof(key), ADF_CY "%d" ADF_RING_ASYM_RX, i);
-        ret = adf_create_ring(accel_dev, SEC, bank, num_msg_asym,
+        ret = adf_create_ring(accel_dev, SEC, asym_bank, num_msg_asym,
                               msg_size, key, qat_alg_asym_callback, 0,
                               &inst->pke_rx);
         if (ret)
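
To make the effect of the patch above concrete: each crypto instance now looks up two bank numbers, one for its symmetric ring pair and one for its asymmetric ring pair, so the two pairs may live in different banks. The sketch below is illustration only; cfg_get_bank() is a hypothetical stand-in for the adf_cfg_get_param_value()/kstrtoul() sequence, and the layout shown (sym rings in bank 2*i, asym rings in bank 2*i+1) is just one possible configuration for a device whose banks hold a single ring pair.

/*
 * Sketch of the split-bank lookup.  The key strings follow the new
 * ADF_RING_SYM_BANK_NUM / ADF_RING_ASYM_BANK_NUM literals
 * ("BankSymNumber" / "BankAsymNumber") added by this patch.
 */
#include <stdio.h>
#include <string.h>

/*
 * Hypothetical stand-in for adf_cfg_get_param_value() + kstrtoul():
 * returns the bank number configured for @key.  Values are hard-coded
 * to mimic a device whose banks hold a single ring pair.
 */
static unsigned long cfg_get_bank(const char *key, int inst)
{
    if (strstr(key, "BankSymNumber"))
        return 2 * inst;        /* symmetric rings */
    return 2 * inst + 1;        /* asymmetric rings */
}

int main(void)
{
    char key[64];
    int i;

    for (i = 0; i < 2; i++) {   /* two crypto instances */
        unsigned long sym_bank, asym_bank;

        /* Key composition mirrors ADF_CY "%d" ADF_RING_SYM_BANK_NUM */
        snprintf(key, sizeof(key), "Cy%d" "BankSymNumber", i);
        sym_bank = cfg_get_bank(key, i);

        snprintf(key, sizeof(key), "Cy%d" "BankAsymNumber", i);
        asym_bank = cfg_get_bank(key, i);

        printf("instance %d: sym rings on bank %lu, asym rings on bank %lu\n",
               i, sym_bank, asym_bank);
    }
    return 0;
}
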
From patchwork Mon Oct 12 20:38:47 2020
X-Patchwork-Submitter: Giovanni Cabiddu
X-Patchwork-Id: 285435
From: Giovanni Cabiddu
To: herbert@gondor.apana.org.au
Cc: linux-crypto@vger.kernel.org, qat-linux@intel.com, Giovanni Cabiddu,
 Wojciech Ziemba, Fiona Trahe, Andy Shevchenko
Subject: [PATCH 31/31] crypto: qat - extend ae_mask
Date: Mon, 12 Oct 2020 21:38:47 +0100
Message-Id: <20201012203847.340030-32-giovanni.cabiddu@intel.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201012203847.340030-1-giovanni.cabiddu@intel.com>
References: <20201012203847.340030-1-giovanni.cabiddu@intel.com>
X-Mailing-List: linux-crypto@vger.kernel.org

Change type of ae_mask in adf_hw_device_data to allow for devices with
more than 16 Acceleration Engines (AEs).

Signed-off-by: Giovanni Cabiddu
Reviewed-by: Wojciech Ziemba
Reviewed-by: Fiona Trahe
Reviewed-by: Andy Shevchenko
---
 drivers/crypto/qat/qat_common/adf_accel_devices.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/crypto/qat/qat_common/adf_accel_devices.h b/drivers/crypto/qat/qat_common/adf_accel_devices.h
index a3b63dfe4d7b..996d25565b11 100644
--- a/drivers/crypto/qat/qat_common/adf_accel_devices.h
+++ b/drivers/crypto/qat/qat_common/adf_accel_devices.h
@@ -181,7 +181,7 @@ struct adf_hw_device_data {
     u32 accel_capabilities_mask;
     u32 instance_id;
     u16 accel_mask;
-    u16 ae_mask;
+    u32 ae_mask;
     u32 admin_ae_mask;
     u16 tx_rings_mask;
     u8 tx_rx_gap;
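
The practical consequence of widening ae_mask from u16 to u32 is that a single per-device bitmask can describe up to 32 Acceleration Engines instead of 16. A minimal stand-alone sketch of the kind of iteration such a mask drives is below; it is plain C, the loop is merely a userspace analogue of the kernel's bitmap helpers, and the 18-engine mask is an example value, not taken from any real device.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* A u32 mask covers engines 0..31; a u16 tops out at engine 15. */
    uint32_t ae_mask = 0x0003ffff;  /* example: 18 acceleration engines enabled */
    unsigned int ae;

    for (ae = 0; ae < 32; ae++)
        if (ae_mask & (1u << ae))
            printf("AE %u enabled\n", ae);
    return 0;
}
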