From patchwork Wed Apr 30 11:34:43 2025
X-Patchwork-Submitter: Suman Kumar Chakraborty
X-Patchwork-Id: 886559
From: Suman Kumar Chakraborty
To: herbert@gondor.apana.org.au
Cc: linux-crypto@vger.kernel.org, qat-linux@intel.com
Subject: [PATCH 01/11] crypto: qat - rename and relocate timer logic
Date: Wed, 30 Apr 2025 12:34:43 +0100
Message-Id: <20250430113453.1587497-2-suman.kumar.chakraborty@intel.com>
In-Reply-To: <20250430113453.1587497-1-suman.kumar.chakraborty@intel.com>
References:
<20250430113453.1587497-1-suman.kumar.chakraborty@intel.com> Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: George Abraham P Rename adf_gen4_timer.c to adf_timer.c and adf_gen4_timer.h to adf_timer.h to make the files generation-agnostic. This includes renaming the start() and stop() timer APIs and macro definitions to be generic, allowing for reuse across different device generations. This does not introduce any functional changes. Signed-off-by: George Abraham P Reviewed-by: Giovanni Cabiddu Reviewed-by: Andy Shevchenko --- .../intel/qat/qat_420xx/adf_420xx_hw_data.c | 6 +++--- .../intel/qat/qat_4xxx/adf_4xxx_hw_data.c | 6 +++--- drivers/crypto/intel/qat/qat_common/Makefile | 2 +- .../{adf_gen4_timer.c => adf_timer.c} | 18 +++++++++--------- .../{adf_gen4_timer.h => adf_timer.h} | 10 +++++----- 5 files changed, 21 insertions(+), 21 deletions(-) rename drivers/crypto/intel/qat/qat_common/{adf_gen4_timer.c => adf_timer.c} (78%) rename drivers/crypto/intel/qat/qat_common/{adf_gen4_timer.h => adf_timer.h} (58%) diff --git a/drivers/crypto/intel/qat/qat_420xx/adf_420xx_hw_data.c b/drivers/crypto/intel/qat/qat_420xx/adf_420xx_hw_data.c index 795f4598400b..5817b3164185 100644 --- a/drivers/crypto/intel/qat/qat_420xx/adf_420xx_hw_data.c +++ b/drivers/crypto/intel/qat/qat_420xx/adf_420xx_hw_data.c @@ -15,9 +15,9 @@ #include #include #include -#include #include #include +#include #include "adf_420xx_hw_data.h" #include "icp_qat_hw.h" @@ -468,8 +468,8 @@ void adf_init_hw_data_420xx(struct adf_hw_device_data *hw_data, u32 dev_id) hw_data->enable_pm = adf_gen4_enable_pm; hw_data->handle_pm_interrupt = adf_gen4_handle_pm_interrupt; hw_data->dev_config = adf_gen4_dev_config; - hw_data->start_timer = adf_gen4_timer_start; - hw_data->stop_timer = adf_gen4_timer_stop; + hw_data->start_timer = adf_timer_start; + hw_data->stop_timer = adf_timer_stop; hw_data->get_hb_clock = adf_gen4_get_heartbeat_clock; hw_data->num_hb_ctrs = ADF_NUM_HB_CNT_PER_AE; hw_data->clock_frequency = ADF_420XX_AE_FREQ; diff --git a/drivers/crypto/intel/qat/qat_4xxx/adf_4xxx_hw_data.c b/drivers/crypto/intel/qat/qat_4xxx/adf_4xxx_hw_data.c index 7d4c366aa8b2..2d89d4a3a7b9 100644 --- a/drivers/crypto/intel/qat/qat_4xxx/adf_4xxx_hw_data.c +++ b/drivers/crypto/intel/qat/qat_4xxx/adf_4xxx_hw_data.c @@ -15,9 +15,9 @@ #include #include #include "adf_gen4_ras.h" -#include #include #include +#include #include "adf_4xxx_hw_data.h" #include "icp_qat_hw.h" @@ -454,8 +454,8 @@ void adf_init_hw_data_4xxx(struct adf_hw_device_data *hw_data, u32 dev_id) hw_data->enable_pm = adf_gen4_enable_pm; hw_data->handle_pm_interrupt = adf_gen4_handle_pm_interrupt; hw_data->dev_config = adf_gen4_dev_config; - hw_data->start_timer = adf_gen4_timer_start; - hw_data->stop_timer = adf_gen4_timer_stop; + hw_data->start_timer = adf_timer_start; + hw_data->stop_timer = adf_timer_stop; hw_data->get_hb_clock = adf_gen4_get_heartbeat_clock; hw_data->num_hb_ctrs = ADF_NUM_HB_CNT_PER_AE; hw_data->clock_frequency = ADF_4XXX_AE_FREQ; diff --git a/drivers/crypto/intel/qat/qat_common/Makefile b/drivers/crypto/intel/qat/qat_common/Makefile index af5df29fd2e3..0370eaad42b1 100644 --- a/drivers/crypto/intel/qat/qat_common/Makefile +++ b/drivers/crypto/intel/qat/qat_common/Makefile @@ -19,7 +19,6 @@ intel_qat-y := adf_accel_engine.o \ adf_gen4_hw_data.o \ adf_gen4_pm.o \ adf_gen4_ras.o \ - adf_gen4_timer.o \ adf_gen4_vf_mig.o \ adf_hw_arbiter.o \ adf_init.o \ @@ -30,6 +29,7 @@ intel_qat-y := 
adf_accel_engine.o \ adf_sysfs.o \ adf_sysfs_ras_counters.o \ adf_sysfs_rl.o \ + adf_timer.o \ adf_transport.o \ qat_algs.o \ qat_algs_send.o \ diff --git a/drivers/crypto/intel/qat/qat_common/adf_gen4_timer.c b/drivers/crypto/intel/qat/qat_common/adf_timer.c similarity index 78% rename from drivers/crypto/intel/qat/qat_common/adf_gen4_timer.c rename to drivers/crypto/intel/qat/qat_common/adf_timer.c index 35ccb91d6ec1..8962a49f145a 100644 --- a/drivers/crypto/intel/qat/qat_common/adf_gen4_timer.c +++ b/drivers/crypto/intel/qat/qat_common/adf_timer.c @@ -12,9 +12,9 @@ #include "adf_admin.h" #include "adf_accel_devices.h" #include "adf_common_drv.h" -#include "adf_gen4_timer.h" +#include "adf_timer.h" -#define ADF_GEN4_TIMER_PERIOD_MS 200 +#define ADF_DEFAULT_TIMER_PERIOD_MS 200 /* This periodic update is used to trigger HB, RL & TL fw events */ static void work_handler(struct work_struct *work) @@ -27,16 +27,16 @@ static void work_handler(struct work_struct *work) accel_dev = timer_ctx->accel_dev; adf_misc_wq_queue_delayed_work(&timer_ctx->work_ctx, - msecs_to_jiffies(ADF_GEN4_TIMER_PERIOD_MS)); + msecs_to_jiffies(ADF_DEFAULT_TIMER_PERIOD_MS)); time_periods = div_u64(ktime_ms_delta(ktime_get_real(), timer_ctx->initial_ktime), - ADF_GEN4_TIMER_PERIOD_MS); + ADF_DEFAULT_TIMER_PERIOD_MS); if (adf_send_admin_tim_sync(accel_dev, time_periods)) dev_err(&GET_DEV(accel_dev), "Failed to synchronize qat timer\n"); } -int adf_gen4_timer_start(struct adf_accel_dev *accel_dev) +int adf_timer_start(struct adf_accel_dev *accel_dev) { struct adf_timer *timer_ctx; @@ -50,13 +50,13 @@ int adf_gen4_timer_start(struct adf_accel_dev *accel_dev) INIT_DELAYED_WORK(&timer_ctx->work_ctx, work_handler); adf_misc_wq_queue_delayed_work(&timer_ctx->work_ctx, - msecs_to_jiffies(ADF_GEN4_TIMER_PERIOD_MS)); + msecs_to_jiffies(ADF_DEFAULT_TIMER_PERIOD_MS)); return 0; } -EXPORT_SYMBOL_GPL(adf_gen4_timer_start); +EXPORT_SYMBOL_GPL(adf_timer_start); -void adf_gen4_timer_stop(struct adf_accel_dev *accel_dev) +void adf_timer_stop(struct adf_accel_dev *accel_dev) { struct adf_timer *timer_ctx = accel_dev->timer; @@ -68,4 +68,4 @@ void adf_gen4_timer_stop(struct adf_accel_dev *accel_dev) kfree(timer_ctx); accel_dev->timer = NULL; } -EXPORT_SYMBOL_GPL(adf_gen4_timer_stop); +EXPORT_SYMBOL_GPL(adf_timer_stop); diff --git a/drivers/crypto/intel/qat/qat_common/adf_gen4_timer.h b/drivers/crypto/intel/qat/qat_common/adf_timer.h similarity index 58% rename from drivers/crypto/intel/qat/qat_common/adf_gen4_timer.h rename to drivers/crypto/intel/qat/qat_common/adf_timer.h index 66a709e7b358..68e5136d6ba1 100644 --- a/drivers/crypto/intel/qat/qat_common/adf_gen4_timer.h +++ b/drivers/crypto/intel/qat/qat_common/adf_timer.h @@ -1,8 +1,8 @@ /* SPDX-License-Identifier: GPL-2.0-only */ /* Copyright(c) 2023 Intel Corporation */ -#ifndef ADF_GEN4_TIMER_H_ -#define ADF_GEN4_TIMER_H_ +#ifndef ADF_TIMER_H_ +#define ADF_TIMER_H_ #include #include @@ -15,7 +15,7 @@ struct adf_timer { ktime_t initial_ktime; }; -int adf_gen4_timer_start(struct adf_accel_dev *accel_dev); -void adf_gen4_timer_stop(struct adf_accel_dev *accel_dev); +int adf_timer_start(struct adf_accel_dev *accel_dev); +void adf_timer_stop(struct adf_accel_dev *accel_dev); -#endif /* ADF_GEN4_TIMER_H_ */ +#endif /* ADF_TIMER_H_ */ From patchwork Wed Apr 30 11:34:44 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Suman Kumar Chakraborty X-Patchwork-Id: 886180 Received: from mgamail.intel.com (mgamail.intel.com 
[198.175.65.15])
From: Suman Kumar Chakraborty
To: herbert@gondor.apana.org.au
Cc: linux-crypto@vger.kernel.org, qat-linux@intel.com
Subject: [PATCH 02/11] crypto: qat - refactor compression template logic
Date: Wed, 30 Apr 2025 12:34:44 +0100
Message-Id: <20250430113453.1587497-3-suman.kumar.chakraborty@intel.com>
In-Reply-To: <20250430113453.1587497-1-suman.kumar.chakraborty@intel.com>
References: <20250430113453.1587497-1-suman.kumar.chakraborty@intel.com>

The logic that generates the compression templates, which are used
to submit compression requests to the QAT device, is very similar between QAT devices and diverges mainly on the HW generation-specific configuration word. This makes the logic that generates the compression and decompression templates common between GEN2 and GEN4 devices and abstracts the generation-specific logic to the generation-specific implementations. The adf_gen2_dc.c and adf_gen4_dc.c have been replaced by adf_dc.c, and the generation-specific logic has been reduced and moved to adf_gen2_hw_data.c and adf_gen4_hw_data.c. This does not introduce any functional change. Co-developed-by: Vijay Sundar Selvamani Signed-off-by: Vijay Sundar Selvamani Signed-off-by: Suman Kumar Chakraborty Reviewed-by: Giovanni Cabiddu Reviewed-by: Andy Shevchenko --- .../intel/qat/qat_420xx/adf_420xx_hw_data.c | 1 - .../intel/qat/qat_4xxx/adf_4xxx_hw_data.c | 1 - .../intel/qat/qat_c3xxx/adf_c3xxx_hw_data.c | 1 - .../qat/qat_c3xxxvf/adf_c3xxxvf_hw_data.c | 1 - .../intel/qat/qat_c62x/adf_c62x_hw_data.c | 1 - .../intel/qat/qat_c62xvf/adf_c62xvf_hw_data.c | 1 - drivers/crypto/intel/qat/qat_common/Makefile | 3 +- .../intel/qat/qat_common/adf_accel_devices.h | 4 +- .../qat_common/{adf_gen2_dc.c => adf_dc.c} | 47 +++++------ drivers/crypto/intel/qat/qat_common/adf_dc.h | 17 ++++ .../crypto/intel/qat/qat_common/adf_gen2_dc.h | 10 --- .../intel/qat/qat_common/adf_gen2_hw_data.c | 57 +++++++++++++ .../intel/qat/qat_common/adf_gen2_hw_data.h | 1 + .../crypto/intel/qat/qat_common/adf_gen4_dc.c | 83 ------------------- .../crypto/intel/qat/qat_common/adf_gen4_dc.h | 10 --- .../intel/qat/qat_common/adf_gen4_hw_data.c | 70 ++++++++++++++++ .../intel/qat/qat_common/adf_gen4_hw_data.h | 2 + .../intel/qat/qat_common/qat_comp_algs.c | 5 +- .../intel/qat/qat_common/qat_compression.c | 1 - .../intel/qat/qat_common/qat_compression.h | 1 - .../qat/qat_dh895xcc/adf_dh895xcc_hw_data.c | 1 - .../qat_dh895xccvf/adf_dh895xccvf_hw_data.c | 1 - 22 files changed, 173 insertions(+), 146 deletions(-) rename drivers/crypto/intel/qat/qat_common/{adf_gen2_dc.c => adf_dc.c} (61%) create mode 100644 drivers/crypto/intel/qat/qat_common/adf_dc.h delete mode 100644 drivers/crypto/intel/qat/qat_common/adf_gen2_dc.h delete mode 100644 drivers/crypto/intel/qat/qat_common/adf_gen4_dc.c delete mode 100644 drivers/crypto/intel/qat/qat_common/adf_gen4_dc.h diff --git a/drivers/crypto/intel/qat/qat_420xx/adf_420xx_hw_data.c b/drivers/crypto/intel/qat/qat_420xx/adf_420xx_hw_data.c index 5817b3164185..7c3c0f561c95 100644 --- a/drivers/crypto/intel/qat/qat_420xx/adf_420xx_hw_data.c +++ b/drivers/crypto/intel/qat/qat_420xx/adf_420xx_hw_data.c @@ -9,7 +9,6 @@ #include #include #include -#include #include #include #include diff --git a/drivers/crypto/intel/qat/qat_4xxx/adf_4xxx_hw_data.c b/drivers/crypto/intel/qat/qat_4xxx/adf_4xxx_hw_data.c index 2d89d4a3a7b9..bd0b1b1015c0 100644 --- a/drivers/crypto/intel/qat/qat_4xxx/adf_4xxx_hw_data.c +++ b/drivers/crypto/intel/qat/qat_4xxx/adf_4xxx_hw_data.c @@ -9,7 +9,6 @@ #include #include #include -#include #include #include #include diff --git a/drivers/crypto/intel/qat/qat_c3xxx/adf_c3xxx_hw_data.c b/drivers/crypto/intel/qat/qat_c3xxx/adf_c3xxx_hw_data.c index 9425af26d34c..07f2c42a68f5 100644 --- a/drivers/crypto/intel/qat/qat_c3xxx/adf_c3xxx_hw_data.c +++ b/drivers/crypto/intel/qat/qat_c3xxx/adf_c3xxx_hw_data.c @@ -5,7 +5,6 @@ #include #include #include -#include #include #include #include diff --git a/drivers/crypto/intel/qat/qat_c3xxxvf/adf_c3xxxvf_hw_data.c
b/drivers/crypto/intel/qat/qat_c3xxxvf/adf_c3xxxvf_hw_data.c index f73d9a4a9ab7..db3c33fa1881 100644 --- a/drivers/crypto/intel/qat/qat_c3xxxvf/adf_c3xxxvf_hw_data.c +++ b/drivers/crypto/intel/qat/qat_c3xxxvf/adf_c3xxxvf_hw_data.c @@ -3,7 +3,6 @@ #include #include #include -#include #include #include #include diff --git a/drivers/crypto/intel/qat/qat_c62x/adf_c62x_hw_data.c b/drivers/crypto/intel/qat/qat_c62x/adf_c62x_hw_data.c index 1a2f36b603fb..0b410b41474d 100644 --- a/drivers/crypto/intel/qat/qat_c62x/adf_c62x_hw_data.c +++ b/drivers/crypto/intel/qat/qat_c62x/adf_c62x_hw_data.c @@ -5,7 +5,6 @@ #include #include #include -#include #include #include #include diff --git a/drivers/crypto/intel/qat/qat_c62xvf/adf_c62xvf_hw_data.c b/drivers/crypto/intel/qat/qat_c62xvf/adf_c62xvf_hw_data.c index 29e53b41a895..7f00035d3661 100644 --- a/drivers/crypto/intel/qat/qat_c62xvf/adf_c62xvf_hw_data.c +++ b/drivers/crypto/intel/qat/qat_c62xvf/adf_c62xvf_hw_data.c @@ -3,7 +3,6 @@ #include #include #include -#include #include #include #include diff --git a/drivers/crypto/intel/qat/qat_common/Makefile b/drivers/crypto/intel/qat/qat_common/Makefile index 0370eaad42b1..0a9da7398a78 100644 --- a/drivers/crypto/intel/qat/qat_common/Makefile +++ b/drivers/crypto/intel/qat/qat_common/Makefile @@ -8,13 +8,12 @@ intel_qat-y := adf_accel_engine.o \ adf_cfg_services.o \ adf_clock.o \ adf_ctl_drv.o \ + adf_dc.o \ adf_dev_mgr.o \ adf_gen2_config.o \ - adf_gen2_dc.o \ adf_gen2_hw_csr_data.o \ adf_gen2_hw_data.o \ adf_gen4_config.o \ - adf_gen4_dc.o \ adf_gen4_hw_csr_data.o \ adf_gen4_hw_data.o \ adf_gen4_pm.o \ diff --git a/drivers/crypto/intel/qat/qat_common/adf_accel_devices.h b/drivers/crypto/intel/qat/qat_common/adf_accel_devices.h index 1e301a20c244..a39f506322f6 100644 --- a/drivers/crypto/intel/qat/qat_common/adf_accel_devices.h +++ b/drivers/crypto/intel/qat/qat_common/adf_accel_devices.h @@ -12,6 +12,7 @@ #include #include #include "adf_cfg_common.h" +#include "adf_dc.h" #include "adf_rl.h" #include "adf_telemetry.h" #include "adf_pfvf_msg.h" @@ -267,7 +268,8 @@ struct adf_pfvf_ops { }; struct adf_dc_ops { - void (*build_deflate_ctx)(void *ctx); + int (*build_comp_block)(void *ctx, enum adf_dc_algo algo); + int (*build_decomp_block)(void *ctx, enum adf_dc_algo algo); }; struct qat_migdev_ops { diff --git a/drivers/crypto/intel/qat/qat_common/adf_gen2_dc.c b/drivers/crypto/intel/qat/qat_common/adf_dc.c similarity index 61% rename from drivers/crypto/intel/qat/qat_common/adf_gen2_dc.c rename to drivers/crypto/intel/qat/qat_common/adf_dc.c index 47261b1c1da6..4beb4b7dbf0e 100644 --- a/drivers/crypto/intel/qat/qat_common/adf_gen2_dc.c +++ b/drivers/crypto/intel/qat/qat_common/adf_dc.c @@ -1,22 +1,21 @@ // SPDX-License-Identifier: GPL-2.0-only /* Copyright(c) 2022 Intel Corporation */ #include "adf_accel_devices.h" -#include "adf_gen2_dc.h" +#include "adf_dc.h" #include "icp_qat_fw_comp.h" -static void qat_comp_build_deflate_ctx(void *ctx) +int qat_comp_build_ctx(struct adf_accel_dev *accel_dev, void *ctx, enum adf_dc_algo algo) { - struct icp_qat_fw_comp_req *req_tmpl = (struct icp_qat_fw_comp_req *)ctx; - struct icp_qat_fw_comn_req_hdr *header = &req_tmpl->comn_hdr; - struct icp_qat_fw_comp_req_hdr_cd_pars *cd_pars = &req_tmpl->cd_pars; - struct icp_qat_fw_comp_req_params *req_pars = &req_tmpl->comp_pars; + struct icp_qat_fw_comp_req *req_tmpl = ctx; struct icp_qat_fw_comp_cd_hdr *comp_cd_ctrl = &req_tmpl->comp_cd_ctrl; + struct icp_qat_fw_comp_req_params *req_pars = &req_tmpl->comp_pars; + struct 
icp_qat_fw_comn_req_hdr *header = &req_tmpl->comn_hdr; + int ret; memset(req_tmpl, 0, sizeof(*req_tmpl)); header->hdr_flags = ICP_QAT_FW_COMN_HDR_FLAGS_BUILD(ICP_QAT_FW_COMN_REQ_FLAG_SET); header->service_type = ICP_QAT_FW_COMN_REQ_CPM_FW_COMP; - header->service_cmd_id = ICP_QAT_FW_COMP_CMD_STATIC; header->comn_req_flags = ICP_QAT_FW_COMN_FLAGS_BUILD(QAT_COMN_CD_FLD_TYPE_16BYTE_DATA, QAT_COMN_PTR_TYPE_SGL); @@ -26,12 +25,14 @@ static void qat_comp_build_deflate_ctx(void *ctx) ICP_QAT_FW_COMP_NOT_ENH_AUTO_SELECT_BEST, ICP_QAT_FW_COMP_NOT_DISABLE_TYPE0_ENH_AUTO_SELECT_BEST, ICP_QAT_FW_COMP_ENABLE_SECURE_RAM_USED_AS_INTMD_BUF); - cd_pars->u.sl.comp_slice_cfg_word[0] = - ICP_QAT_HW_COMPRESSION_CONFIG_BUILD(ICP_QAT_HW_COMPRESSION_DIR_COMPRESS, - ICP_QAT_HW_COMPRESSION_DELAYED_MATCH_DISABLED, - ICP_QAT_HW_COMPRESSION_ALGO_DEFLATE, - ICP_QAT_HW_COMPRESSION_DEPTH_1, - ICP_QAT_HW_COMPRESSION_FILE_TYPE_0); + + /* Build HW config block for compression */ + ret = GET_DC_OPS(accel_dev)->build_comp_block(ctx, algo); + if (ret) { + dev_err(&GET_DEV(accel_dev), "Failed to build compression block\n"); + return ret; + } + req_pars->crc.legacy.initial_adler = COMP_CPR_INITIAL_ADLER; req_pars->crc.legacy.initial_crc32 = COMP_CPR_INITIAL_CRC; req_pars->req_par_flags = @@ -52,19 +53,11 @@ static void qat_comp_build_deflate_ctx(void *ctx) /* Fill second half of the template for decompression */ memcpy(req_tmpl + 1, req_tmpl, sizeof(*req_tmpl)); req_tmpl++; - header = &req_tmpl->comn_hdr; - header->service_cmd_id = ICP_QAT_FW_COMP_CMD_DECOMPRESS; - cd_pars = &req_tmpl->cd_pars; - cd_pars->u.sl.comp_slice_cfg_word[0] = - ICP_QAT_HW_COMPRESSION_CONFIG_BUILD(ICP_QAT_HW_COMPRESSION_DIR_DECOMPRESS, - ICP_QAT_HW_COMPRESSION_DELAYED_MATCH_DISABLED, - ICP_QAT_HW_COMPRESSION_ALGO_DEFLATE, - ICP_QAT_HW_COMPRESSION_DEPTH_1, - ICP_QAT_HW_COMPRESSION_FILE_TYPE_0); -} -void adf_gen2_init_dc_ops(struct adf_dc_ops *dc_ops) -{ - dc_ops->build_deflate_ctx = qat_comp_build_deflate_ctx; + /* Build HW config block for decompression */ + ret = GET_DC_OPS(accel_dev)->build_decomp_block(req_tmpl, algo); + if (ret) + dev_err(&GET_DEV(accel_dev), "Failed to build decompression block\n"); + + return ret; } -EXPORT_SYMBOL_GPL(adf_gen2_init_dc_ops); diff --git a/drivers/crypto/intel/qat/qat_common/adf_dc.h b/drivers/crypto/intel/qat/qat_common/adf_dc.h new file mode 100644 index 000000000000..6cb5e09054a6 --- /dev/null +++ b/drivers/crypto/intel/qat/qat_common/adf_dc.h @@ -0,0 +1,17 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Copyright(c) 2025 Intel Corporation */ +#ifndef ADF_DC_H +#define ADF_DC_H + +struct adf_accel_dev; + +enum adf_dc_algo { + QAT_DEFLATE, + QAT_LZ4, + QAT_LZ4S, + QAT_ZSTD, +}; + +int qat_comp_build_ctx(struct adf_accel_dev *accel_dev, void *ctx, enum adf_dc_algo algo); + +#endif /* ADF_DC_H */ diff --git a/drivers/crypto/intel/qat/qat_common/adf_gen2_dc.h b/drivers/crypto/intel/qat/qat_common/adf_gen2_dc.h deleted file mode 100644 index 6eae023354d7..000000000000 --- a/drivers/crypto/intel/qat/qat_common/adf_gen2_dc.h +++ /dev/null @@ -1,10 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0-only */ -/* Copyright(c) 2022 Intel Corporation */ -#ifndef ADF_GEN2_DC_H -#define ADF_GEN2_DC_H - -#include "adf_accel_devices.h" - -void adf_gen2_init_dc_ops(struct adf_dc_ops *dc_ops); - -#endif /* ADF_GEN2_DC_H */ diff --git a/drivers/crypto/intel/qat/qat_common/adf_gen2_hw_data.c b/drivers/crypto/intel/qat/qat_common/adf_gen2_hw_data.c index 2b263442c856..6a505e9a5cf9 100644 --- 
a/drivers/crypto/intel/qat/qat_common/adf_gen2_hw_data.c +++ b/drivers/crypto/intel/qat/qat_common/adf_gen2_hw_data.c @@ -1,7 +1,9 @@ // SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0-only) /* Copyright(c) 2020 Intel Corporation */ #include "adf_common_drv.h" +#include "adf_dc.h" #include "adf_gen2_hw_data.h" +#include "icp_qat_fw_comp.h" #include "icp_qat_hw.h" #include @@ -169,3 +171,58 @@ void adf_gen2_set_ssm_wdtimer(struct adf_accel_dev *accel_dev) } } EXPORT_SYMBOL_GPL(adf_gen2_set_ssm_wdtimer); + +static int adf_gen2_build_comp_block(void *ctx, enum adf_dc_algo algo) +{ + struct icp_qat_fw_comp_req *req_tmpl = ctx; + struct icp_qat_fw_comp_req_hdr_cd_pars *cd_pars = &req_tmpl->cd_pars; + struct icp_qat_fw_comn_req_hdr *header = &req_tmpl->comn_hdr; + + switch (algo) { + case QAT_DEFLATE: + header->service_cmd_id = ICP_QAT_FW_COMP_CMD_STATIC; + break; + default: + return -EINVAL; + } + + cd_pars->u.sl.comp_slice_cfg_word[0] = + ICP_QAT_HW_COMPRESSION_CONFIG_BUILD(ICP_QAT_HW_COMPRESSION_DIR_COMPRESS, + ICP_QAT_HW_COMPRESSION_DELAYED_MATCH_DISABLED, + ICP_QAT_HW_COMPRESSION_ALGO_DEFLATE, + ICP_QAT_HW_COMPRESSION_DEPTH_1, + ICP_QAT_HW_COMPRESSION_FILE_TYPE_0); + + return 0; +} + +static int adf_gen2_build_decomp_block(void *ctx, enum adf_dc_algo algo) +{ + struct icp_qat_fw_comp_req *req_tmpl = ctx; + struct icp_qat_fw_comp_req_hdr_cd_pars *cd_pars = &req_tmpl->cd_pars; + struct icp_qat_fw_comn_req_hdr *header = &req_tmpl->comn_hdr; + + switch (algo) { + case QAT_DEFLATE: + header->service_cmd_id = ICP_QAT_FW_COMP_CMD_DECOMPRESS; + break; + default: + return -EINVAL; + } + + cd_pars->u.sl.comp_slice_cfg_word[0] = + ICP_QAT_HW_COMPRESSION_CONFIG_BUILD(ICP_QAT_HW_COMPRESSION_DIR_DECOMPRESS, + ICP_QAT_HW_COMPRESSION_DELAYED_MATCH_DISABLED, + ICP_QAT_HW_COMPRESSION_ALGO_DEFLATE, + ICP_QAT_HW_COMPRESSION_DEPTH_1, + ICP_QAT_HW_COMPRESSION_FILE_TYPE_0); + + return 0; +} + +void adf_gen2_init_dc_ops(struct adf_dc_ops *dc_ops) +{ + dc_ops->build_comp_block = adf_gen2_build_comp_block; + dc_ops->build_decomp_block = adf_gen2_build_decomp_block; +} +EXPORT_SYMBOL_GPL(adf_gen2_init_dc_ops); diff --git a/drivers/crypto/intel/qat/qat_common/adf_gen2_hw_data.h b/drivers/crypto/intel/qat/qat_common/adf_gen2_hw_data.h index 708e9186127b..59bad368a921 100644 --- a/drivers/crypto/intel/qat/qat_common/adf_gen2_hw_data.h +++ b/drivers/crypto/intel/qat/qat_common/adf_gen2_hw_data.h @@ -88,5 +88,6 @@ void adf_gen2_get_arb_info(struct arb_info *arb_info); void adf_gen2_enable_ints(struct adf_accel_dev *accel_dev); u32 adf_gen2_get_accel_cap(struct adf_accel_dev *accel_dev); void adf_gen2_set_ssm_wdtimer(struct adf_accel_dev *accel_dev); +void adf_gen2_init_dc_ops(struct adf_dc_ops *dc_ops); #endif diff --git a/drivers/crypto/intel/qat/qat_common/adf_gen4_dc.c b/drivers/crypto/intel/qat/qat_common/adf_gen4_dc.c deleted file mode 100644 index 5859238e37de..000000000000 --- a/drivers/crypto/intel/qat/qat_common/adf_gen4_dc.c +++ /dev/null @@ -1,83 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0-only -/* Copyright(c) 2022 Intel Corporation */ -#include "adf_accel_devices.h" -#include "icp_qat_fw_comp.h" -#include "icp_qat_hw_20_comp.h" -#include "adf_gen4_dc.h" - -static void qat_comp_build_deflate(void *ctx) -{ - struct icp_qat_fw_comp_req *req_tmpl = - (struct icp_qat_fw_comp_req *)ctx; - struct icp_qat_fw_comn_req_hdr *header = &req_tmpl->comn_hdr; - struct icp_qat_fw_comp_req_hdr_cd_pars *cd_pars = &req_tmpl->cd_pars; - struct icp_qat_fw_comp_req_params *req_pars = &req_tmpl->comp_pars; - struct 
icp_qat_hw_comp_20_config_csr_upper hw_comp_upper_csr = {0}; - struct icp_qat_hw_comp_20_config_csr_lower hw_comp_lower_csr = {0}; - struct icp_qat_hw_decomp_20_config_csr_lower hw_decomp_lower_csr = {0}; - u32 upper_val; - u32 lower_val; - - memset(req_tmpl, 0, sizeof(*req_tmpl)); - header->hdr_flags = - ICP_QAT_FW_COMN_HDR_FLAGS_BUILD(ICP_QAT_FW_COMN_REQ_FLAG_SET); - header->service_type = ICP_QAT_FW_COMN_REQ_CPM_FW_COMP; - header->service_cmd_id = ICP_QAT_FW_COMP_CMD_STATIC; - header->comn_req_flags = - ICP_QAT_FW_COMN_FLAGS_BUILD(QAT_COMN_CD_FLD_TYPE_16BYTE_DATA, - QAT_COMN_PTR_TYPE_SGL); - header->serv_specif_flags = - ICP_QAT_FW_COMP_FLAGS_BUILD(ICP_QAT_FW_COMP_STATELESS_SESSION, - ICP_QAT_FW_COMP_AUTO_SELECT_BEST, - ICP_QAT_FW_COMP_NOT_ENH_AUTO_SELECT_BEST, - ICP_QAT_FW_COMP_NOT_DISABLE_TYPE0_ENH_AUTO_SELECT_BEST, - ICP_QAT_FW_COMP_ENABLE_SECURE_RAM_USED_AS_INTMD_BUF); - hw_comp_lower_csr.skip_ctrl = ICP_QAT_HW_COMP_20_BYTE_SKIP_3BYTE_LITERAL; - hw_comp_lower_csr.algo = ICP_QAT_HW_COMP_20_HW_COMP_FORMAT_ILZ77; - hw_comp_lower_csr.lllbd = ICP_QAT_HW_COMP_20_LLLBD_CTRL_LLLBD_ENABLED; - hw_comp_lower_csr.sd = ICP_QAT_HW_COMP_20_SEARCH_DEPTH_LEVEL_1; - hw_comp_lower_csr.hash_update = ICP_QAT_HW_COMP_20_SKIP_HASH_UPDATE_DONT_ALLOW; - hw_comp_lower_csr.edmm = ICP_QAT_HW_COMP_20_EXTENDED_DELAY_MATCH_MODE_EDMM_ENABLED; - hw_comp_upper_csr.nice = ICP_QAT_HW_COMP_20_CONFIG_CSR_NICE_PARAM_DEFAULT_VAL; - hw_comp_upper_csr.lazy = ICP_QAT_HW_COMP_20_CONFIG_CSR_LAZY_PARAM_DEFAULT_VAL; - - upper_val = ICP_QAT_FW_COMP_20_BUILD_CONFIG_UPPER(hw_comp_upper_csr); - lower_val = ICP_QAT_FW_COMP_20_BUILD_CONFIG_LOWER(hw_comp_lower_csr); - - cd_pars->u.sl.comp_slice_cfg_word[0] = lower_val; - cd_pars->u.sl.comp_slice_cfg_word[1] = upper_val; - - req_pars->crc.legacy.initial_adler = COMP_CPR_INITIAL_ADLER; - req_pars->crc.legacy.initial_crc32 = COMP_CPR_INITIAL_CRC; - req_pars->req_par_flags = - ICP_QAT_FW_COMP_REQ_PARAM_FLAGS_BUILD(ICP_QAT_FW_COMP_SOP, - ICP_QAT_FW_COMP_EOP, - ICP_QAT_FW_COMP_BFINAL, - ICP_QAT_FW_COMP_CNV, - ICP_QAT_FW_COMP_CNV_RECOVERY, - ICP_QAT_FW_COMP_NO_CNV_DFX, - ICP_QAT_FW_COMP_CRC_MODE_LEGACY, - ICP_QAT_FW_COMP_NO_XXHASH_ACC, - ICP_QAT_FW_COMP_CNV_ERROR_NONE, - ICP_QAT_FW_COMP_NO_APPEND_CRC, - ICP_QAT_FW_COMP_NO_DROP_DATA); - - /* Fill second half of the template for decompression */ - memcpy(req_tmpl + 1, req_tmpl, sizeof(*req_tmpl)); - req_tmpl++; - header = &req_tmpl->comn_hdr; - header->service_cmd_id = ICP_QAT_FW_COMP_CMD_DECOMPRESS; - cd_pars = &req_tmpl->cd_pars; - - hw_decomp_lower_csr.algo = ICP_QAT_HW_DECOMP_20_HW_DECOMP_FORMAT_DEFLATE; - lower_val = ICP_QAT_FW_DECOMP_20_BUILD_CONFIG_LOWER(hw_decomp_lower_csr); - - cd_pars->u.sl.comp_slice_cfg_word[0] = lower_val; - cd_pars->u.sl.comp_slice_cfg_word[1] = 0; -} - -void adf_gen4_init_dc_ops(struct adf_dc_ops *dc_ops) -{ - dc_ops->build_deflate_ctx = qat_comp_build_deflate; -} -EXPORT_SYMBOL_GPL(adf_gen4_init_dc_ops); diff --git a/drivers/crypto/intel/qat/qat_common/adf_gen4_dc.h b/drivers/crypto/intel/qat/qat_common/adf_gen4_dc.h deleted file mode 100644 index 0b1a6774412e..000000000000 --- a/drivers/crypto/intel/qat/qat_common/adf_gen4_dc.h +++ /dev/null @@ -1,10 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0-only */ -/* Copyright(c) 2022 Intel Corporation */ -#ifndef ADF_GEN4_DC_H -#define ADF_GEN4_DC_H - -#include "adf_accel_devices.h" - -void adf_gen4_init_dc_ops(struct adf_dc_ops *dc_ops); - -#endif /* ADF_GEN4_DC_H */ diff --git a/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_data.c 
b/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_data.c index 099949a2421c..0406cb09c5bb 100644 --- a/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_data.c +++ b/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_data.c @@ -9,6 +9,8 @@ #include "adf_fw_config.h" #include "adf_gen4_hw_data.h" #include "adf_gen4_pm.h" +#include "icp_qat_fw_comp.h" +#include "icp_qat_hw_20_comp.h" u32 adf_gen4_get_accel_mask(struct adf_hw_device_data *self) { @@ -663,3 +665,71 @@ int adf_gen4_bank_state_restore(struct adf_accel_dev *accel_dev, u32 bank_number return ret; } EXPORT_SYMBOL_GPL(adf_gen4_bank_state_restore); + +static int adf_gen4_build_comp_block(void *ctx, enum adf_dc_algo algo) +{ + struct icp_qat_fw_comp_req *req_tmpl = ctx; + struct icp_qat_fw_comp_req_hdr_cd_pars *cd_pars = &req_tmpl->cd_pars; + struct icp_qat_hw_comp_20_config_csr_upper hw_comp_upper_csr = { }; + struct icp_qat_hw_comp_20_config_csr_lower hw_comp_lower_csr = { }; + struct icp_qat_fw_comn_req_hdr *header = &req_tmpl->comn_hdr; + u32 upper_val; + u32 lower_val; + + switch (algo) { + case QAT_DEFLATE: + header->service_cmd_id = ICP_QAT_FW_COMP_CMD_DYNAMIC; + break; + default: + return -EINVAL; + } + + hw_comp_lower_csr.skip_ctrl = ICP_QAT_HW_COMP_20_BYTE_SKIP_3BYTE_LITERAL; + hw_comp_lower_csr.algo = ICP_QAT_HW_COMP_20_HW_COMP_FORMAT_ILZ77; + hw_comp_lower_csr.lllbd = ICP_QAT_HW_COMP_20_LLLBD_CTRL_LLLBD_ENABLED; + hw_comp_lower_csr.sd = ICP_QAT_HW_COMP_20_SEARCH_DEPTH_LEVEL_1; + hw_comp_lower_csr.hash_update = ICP_QAT_HW_COMP_20_SKIP_HASH_UPDATE_DONT_ALLOW; + hw_comp_lower_csr.edmm = ICP_QAT_HW_COMP_20_EXTENDED_DELAY_MATCH_MODE_EDMM_ENABLED; + hw_comp_upper_csr.nice = ICP_QAT_HW_COMP_20_CONFIG_CSR_NICE_PARAM_DEFAULT_VAL; + hw_comp_upper_csr.lazy = ICP_QAT_HW_COMP_20_CONFIG_CSR_LAZY_PARAM_DEFAULT_VAL; + + upper_val = ICP_QAT_FW_COMP_20_BUILD_CONFIG_UPPER(hw_comp_upper_csr); + lower_val = ICP_QAT_FW_COMP_20_BUILD_CONFIG_LOWER(hw_comp_lower_csr); + + cd_pars->u.sl.comp_slice_cfg_word[0] = lower_val; + cd_pars->u.sl.comp_slice_cfg_word[1] = upper_val; + + return 0; +} + +static int adf_gen4_build_decomp_block(void *ctx, enum adf_dc_algo algo) +{ + struct icp_qat_fw_comp_req *req_tmpl = ctx; + struct icp_qat_hw_decomp_20_config_csr_lower hw_decomp_lower_csr = { }; + struct icp_qat_fw_comp_req_hdr_cd_pars *cd_pars = &req_tmpl->cd_pars; + struct icp_qat_fw_comn_req_hdr *header = &req_tmpl->comn_hdr; + u32 lower_val; + + switch (algo) { + case QAT_DEFLATE: + header->service_cmd_id = ICP_QAT_FW_COMP_CMD_DECOMPRESS; + break; + default: + return -EINVAL; + } + + hw_decomp_lower_csr.algo = ICP_QAT_HW_DECOMP_20_HW_DECOMP_FORMAT_DEFLATE; + lower_val = ICP_QAT_FW_DECOMP_20_BUILD_CONFIG_LOWER(hw_decomp_lower_csr); + + cd_pars->u.sl.comp_slice_cfg_word[0] = lower_val; + cd_pars->u.sl.comp_slice_cfg_word[1] = 0; + + return 0; +} + +void adf_gen4_init_dc_ops(struct adf_dc_ops *dc_ops) +{ + dc_ops->build_comp_block = adf_gen4_build_comp_block; + dc_ops->build_decomp_block = adf_gen4_build_decomp_block; +} +EXPORT_SYMBOL_GPL(adf_gen4_init_dc_ops); diff --git a/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_data.h b/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_data.h index 51fc2eaa263e..e4f4d5fa616d 100644 --- a/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_data.h +++ b/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_data.h @@ -7,6 +7,7 @@ #include "adf_accel_devices.h" #include "adf_cfg_common.h" +#include "adf_dc.h" /* PCIe configuration space */ #define ADF_GEN4_BAR_MASK (BIT(0) | BIT(2) | BIT(4)) @@ -180,5 +181,6 @@ int 
adf_gen4_bank_state_save(struct adf_accel_dev *accel_dev, u32 bank_number, int adf_gen4_bank_state_restore(struct adf_accel_dev *accel_dev, u32 bank_number, struct bank_state *state); bool adf_gen4_services_supported(unsigned long service_mask); +void adf_gen4_init_dc_ops(struct adf_dc_ops *dc_ops); #endif diff --git a/drivers/crypto/intel/qat/qat_common/qat_comp_algs.c b/drivers/crypto/intel/qat/qat_common/qat_comp_algs.c index a0a29b97a749..8b123472b71c 100644 --- a/drivers/crypto/intel/qat/qat_common/qat_comp_algs.c +++ b/drivers/crypto/intel/qat/qat_common/qat_comp_algs.c @@ -8,6 +8,7 @@ #include #include "adf_accel_devices.h" #include "adf_common_drv.h" +#include "adf_dc.h" #include "qat_bl.h" #include "qat_comp_req.h" #include "qat_compression.h" @@ -145,9 +146,7 @@ static int qat_comp_alg_init_tfm(struct crypto_acomp *acomp_tfm) return -EINVAL; ctx->inst = inst; - ctx->inst->build_deflate_ctx(ctx->comp_ctx); - - return 0; + return qat_comp_build_ctx(inst->accel_dev, ctx->comp_ctx, QAT_DEFLATE); } static void qat_comp_alg_exit_tfm(struct crypto_acomp *acomp_tfm) diff --git a/drivers/crypto/intel/qat/qat_common/qat_compression.c b/drivers/crypto/intel/qat/qat_common/qat_compression.c index 7842a9f22178..c285b45b8679 100644 --- a/drivers/crypto/intel/qat/qat_common/qat_compression.c +++ b/drivers/crypto/intel/qat/qat_common/qat_compression.c @@ -144,7 +144,6 @@ static int qat_compression_create_instances(struct adf_accel_dev *accel_dev) inst->id = i; atomic_set(&inst->refctr, 0); inst->accel_dev = accel_dev; - inst->build_deflate_ctx = GET_DC_OPS(accel_dev)->build_deflate_ctx; snprintf(key, sizeof(key), ADF_DC "%d" ADF_RING_DC_BANK_NUM, i); ret = adf_cfg_get_param_value(accel_dev, SEC, key, val); diff --git a/drivers/crypto/intel/qat/qat_common/qat_compression.h b/drivers/crypto/intel/qat/qat_common/qat_compression.h index aebac2302dcf..5ced3ed0e5ea 100644 --- a/drivers/crypto/intel/qat/qat_common/qat_compression.h +++ b/drivers/crypto/intel/qat/qat_common/qat_compression.h @@ -20,7 +20,6 @@ struct qat_compression_instance { atomic_t refctr; struct qat_instance_backlog backlog; struct adf_dc_data *dc_data; - void (*build_deflate_ctx)(void *ctx); }; static inline bool adf_hw_dev_has_compression(struct adf_accel_dev *accel_dev) diff --git a/drivers/crypto/intel/qat/qat_dh895xcc/adf_dh895xcc_hw_data.c b/drivers/crypto/intel/qat/qat_dh895xcc/adf_dh895xcc_hw_data.c index bf9e8f34f451..5b4bd0ba1ccb 100644 --- a/drivers/crypto/intel/qat/qat_dh895xcc/adf_dh895xcc_hw_data.c +++ b/drivers/crypto/intel/qat/qat_dh895xcc/adf_dh895xcc_hw_data.c @@ -4,7 +4,6 @@ #include #include #include -#include #include #include #include diff --git a/drivers/crypto/intel/qat/qat_dh895xccvf/adf_dh895xccvf_hw_data.c b/drivers/crypto/intel/qat/qat_dh895xccvf/adf_dh895xccvf_hw_data.c index bc59c1473eef..828456c43b76 100644 --- a/drivers/crypto/intel/qat/qat_dh895xccvf/adf_dh895xccvf_hw_data.c +++ b/drivers/crypto/intel/qat/qat_dh895xccvf/adf_dh895xccvf_hw_data.c @@ -3,7 +3,6 @@ #include #include #include -#include #include #include #include From patchwork Wed Apr 30 11:34:45 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Suman Kumar Chakraborty X-Patchwork-Id: 886558 Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 6978E25A355 for ; Wed, 30 Apr 2025 11:35:08 +0000 (UTC) 
From: Suman Kumar Chakraborty
To: herbert@gondor.apana.org.au
Cc: linux-crypto@vger.kernel.org, qat-linux@intel.com
Subject: [PATCH 03/11] crypto: qat - use pr_fmt() in qat uclo.c
Date: Wed, 30 Apr 2025 12:34:45 +0100
Message-Id: <20250430113453.1587497-4-suman.kumar.chakraborty@intel.com>
In-Reply-To: <20250430113453.1587497-1-suman.kumar.chakraborty@intel.com>
References: <20250430113453.1587497-1-suman.kumar.chakraborty@intel.com>

Add pr_fmt() to qat uclo.c logging and update the debug and error messages to utilize it accordingly. This does not introduce any functional changes.
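For readers less familiar with the mechanism, the following is a minimal, self-contained sketch (illustrative only, not taken from this patch) of how pr_fmt() cooperates with the pr_*() helpers: the macro is defined before any includes, and pr_err() then expands to printk(KERN_ERR pr_fmt(fmt), ...), so the "QAT: " prefix no longer needs to be repeated in every format string.

#define pr_fmt(fmt) "QAT: " fmt

#include <linux/errno.h>
#include <linux/printk.h>

/* Hypothetical helper, used only to show the resulting log output. */
static int example_check_file_id(unsigned int file_id)
{
	if (file_id != 0x1234) {
		/* Prints "QAT: invalid header 0x..."; the prefix comes from pr_fmt() */
		pr_err("invalid header 0x%x\n", file_id);
		return -EINVAL;
	}

	return 0;
}

With the prefix centralized in one place, only the format strings change; every log message keeps the same final text, which is why the change below is not a functional one.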
Signed-off-by: Suman Kumar Chakraborty Reviewed-by: Giovanni Cabiddu Reviewed-by: Andy Shevchenko --- .../crypto/intel/qat/qat_common/qat_uclo.c | 135 +++++++++--------- 1 file changed, 65 insertions(+), 70 deletions(-) diff --git a/drivers/crypto/intel/qat/qat_common/qat_uclo.c b/drivers/crypto/intel/qat/qat_common/qat_uclo.c index 620300e70238..9dc26f5d2d01 100644 --- a/drivers/crypto/intel/qat/qat_common/qat_uclo.c +++ b/drivers/crypto/intel/qat/qat_common/qat_uclo.c @@ -1,5 +1,8 @@ // SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0-only) /* Copyright(c) 2014 - 2020 Intel Corporation */ + +#define pr_fmt(fmt) "QAT: " fmt + #include #include #include @@ -60,7 +63,7 @@ static int qat_uclo_free_ae_data(struct icp_qat_uclo_aedata *ae_data) unsigned int i; if (!ae_data) { - pr_err("QAT: bad argument, ae_data is NULL\n"); + pr_err("bad argument, ae_data is NULL\n"); return -EINVAL; } @@ -87,12 +90,11 @@ static int qat_uclo_check_uof_format(struct icp_qat_uof_filehdr *hdr) int min = hdr->min_ver & 0xff; if (hdr->file_id != ICP_QAT_UOF_FID) { - pr_err("QAT: Invalid header 0x%x\n", hdr->file_id); + pr_err("Invalid header 0x%x\n", hdr->file_id); return -EINVAL; } if (min != ICP_QAT_UOF_MINVER || maj != ICP_QAT_UOF_MAJVER) { - pr_err("QAT: bad UOF version, major 0x%x, minor 0x%x\n", - maj, min); + pr_err("bad UOF version, major 0x%x, minor 0x%x\n", maj, min); return -EINVAL; } return 0; @@ -104,20 +106,19 @@ static int qat_uclo_check_suof_format(struct icp_qat_suof_filehdr *suof_hdr) int min = suof_hdr->min_ver & 0xff; if (suof_hdr->file_id != ICP_QAT_SUOF_FID) { - pr_err("QAT: invalid header 0x%x\n", suof_hdr->file_id); + pr_err("invalid header 0x%x\n", suof_hdr->file_id); return -EINVAL; } if (suof_hdr->fw_type != 0) { - pr_err("QAT: unsupported firmware type\n"); + pr_err("unsupported firmware type\n"); return -EINVAL; } if (suof_hdr->num_chunks <= 0x1) { - pr_err("QAT: SUOF chunk amount is incorrect\n"); + pr_err("SUOF chunk amount is incorrect\n"); return -EINVAL; } if (maj != ICP_QAT_SUOF_MAJVER || min != ICP_QAT_SUOF_MINVER) { - pr_err("QAT: bad SUOF version, major 0x%x, minor 0x%x\n", - maj, min); + pr_err("bad SUOF version, major 0x%x, minor 0x%x\n", maj, min); return -EINVAL; } return 0; @@ -224,24 +225,24 @@ static int qat_uclo_fetch_initmem_ae(struct icp_qat_fw_loader_handle *handle, char *str; if ((init_mem->addr + init_mem->num_in_bytes) > (size_range << 0x2)) { - pr_err("QAT: initmem is out of range"); + pr_err("initmem is out of range"); return -EINVAL; } if (init_mem->scope != ICP_QAT_UOF_LOCAL_SCOPE) { - pr_err("QAT: Memory scope for init_mem error\n"); + pr_err("Memory scope for init_mem error\n"); return -EINVAL; } str = qat_uclo_get_string(&obj_handle->str_table, init_mem->sym_name); if (!str) { - pr_err("QAT: AE name assigned in UOF init table is NULL\n"); + pr_err("AE name assigned in UOF init table is NULL\n"); return -EINVAL; } if (qat_uclo_parse_num(str, ae)) { - pr_err("QAT: Parse num for AE number failed\n"); + pr_err("Parse num for AE number failed\n"); return -EINVAL; } if (*ae >= ICP_QAT_UCLO_MAX_AE) { - pr_err("QAT: ae %d out of range\n", *ae); + pr_err("ae %d out of range\n", *ae); return -EINVAL; } return 0; @@ -357,8 +358,7 @@ static int qat_uclo_init_ae_memory(struct icp_qat_fw_loader_handle *handle, return -EINVAL; break; default: - pr_err("QAT: initmem region error. region type=0x%x\n", - init_mem->region); + pr_err("initmem region error. 
region type=0x%x\n", init_mem->region); return -EINVAL; } return 0; @@ -432,7 +432,7 @@ static int qat_uclo_init_memory(struct icp_qat_fw_loader_handle *handle) for_each_set_bit(ae, &ae_mask, handle->hal_handle->ae_max_num) { if (qat_hal_batch_wr_lm(handle, ae, obj_handle->lm_init_tab[ae])) { - pr_err("QAT: fail to batch init lmem for AE %d\n", ae); + pr_err("fail to batch init lmem for AE %d\n", ae); return -EINVAL; } qat_uclo_cleanup_batch_init_list(handle, @@ -540,26 +540,26 @@ qat_uclo_check_image_compat(struct icp_qat_uof_encap_obj *encap_uof_obj, code_page->imp_expr_tab_offset); if (uc_var_tab->entry_num || imp_var_tab->entry_num || imp_expr_tab->entry_num) { - pr_err("QAT: UOF can't contain imported variable to be parsed\n"); + pr_err("UOF can't contain imported variable to be parsed\n"); return -EINVAL; } neigh_reg_tab = (struct icp_qat_uof_objtable *) (encap_uof_obj->beg_uof + code_page->neigh_reg_tab_offset); if (neigh_reg_tab->entry_num) { - pr_err("QAT: UOF can't contain neighbor register table\n"); + pr_err("UOF can't contain neighbor register table\n"); return -EINVAL; } if (image->numpages > 1) { - pr_err("QAT: UOF can't contain multiple pages\n"); + pr_err("UOF can't contain multiple pages\n"); return -EINVAL; } if (ICP_QAT_SHARED_USTORE_MODE(image->ae_mode)) { - pr_err("QAT: UOF can't use shared control store feature\n"); + pr_err("UOF can't use shared control store feature\n"); return -EFAULT; } if (RELOADABLE_CTX_SHARED_MODE(image->ae_mode)) { - pr_err("QAT: UOF can't use reloadable feature\n"); + pr_err("UOF can't use reloadable feature\n"); return -EFAULT; } return 0; @@ -678,7 +678,7 @@ static int qat_uclo_map_ae(struct icp_qat_fw_loader_handle *handle, int max_ae) } } if (!mflag) { - pr_err("QAT: uimage uses AE not set\n"); + pr_err("uimage uses AE not set\n"); return -EINVAL; } return 0; @@ -738,8 +738,7 @@ qat_uclo_get_dev_type(struct icp_qat_fw_loader_handle *handle) case PCI_DEVICE_ID_INTEL_QAT_420XX: return ICP_QAT_AC_4XXX_A_DEV_TYPE; default: - pr_err("QAT: unsupported device 0x%x\n", - handle->pci_dev->device); + pr_err("unsupported device 0x%x\n", handle->pci_dev->device); return 0; } } @@ -749,7 +748,7 @@ static int qat_uclo_check_uof_compat(struct icp_qat_uclo_objhandle *obj_handle) unsigned int maj_ver, prod_type = obj_handle->prod_type; if (!(prod_type & obj_handle->encap_uof_obj.obj_hdr->ac_dev_type)) { - pr_err("QAT: UOF type 0x%x doesn't match with platform 0x%x\n", + pr_err("UOF type 0x%x doesn't match with platform 0x%x\n", obj_handle->encap_uof_obj.obj_hdr->ac_dev_type, prod_type); return -EINVAL; @@ -757,7 +756,7 @@ static int qat_uclo_check_uof_compat(struct icp_qat_uclo_objhandle *obj_handle) maj_ver = obj_handle->prod_rev & 0xff; if (obj_handle->encap_uof_obj.obj_hdr->max_cpu_ver < maj_ver || obj_handle->encap_uof_obj.obj_hdr->min_cpu_ver > maj_ver) { - pr_err("QAT: UOF majVer 0x%x out of range\n", maj_ver); + pr_err("UOF majVer 0x%x out of range\n", maj_ver); return -EINVAL; } return 0; @@ -800,7 +799,7 @@ static int qat_uclo_init_reg(struct icp_qat_fw_loader_handle *handle, case ICP_NEIGH_REL: return qat_hal_init_nn(handle, ae, ctx_mask, reg_addr, value); default: - pr_err("QAT: UOF uses not supported reg type 0x%x\n", reg_type); + pr_err("UOF uses not supported reg type 0x%x\n", reg_type); return -EFAULT; } return 0; @@ -836,8 +835,7 @@ static int qat_uclo_init_reg_sym(struct icp_qat_fw_loader_handle *handle, case ICP_QAT_UOF_INIT_REG_CTX: /* check if ctx is appropriate for the ctxMode */ if (!((1 << init_regsym->ctx) & ctx_mask)) { - 
pr_err("QAT: invalid ctx num = 0x%x\n", - init_regsym->ctx); + pr_err("invalid ctx num = 0x%x\n", init_regsym->ctx); return -EINVAL; } qat_uclo_init_reg(handle, ae, @@ -849,10 +847,10 @@ static int qat_uclo_init_reg_sym(struct icp_qat_fw_loader_handle *handle, exp_res); break; case ICP_QAT_UOF_INIT_EXPR: - pr_err("QAT: INIT_EXPR feature not supported\n"); + pr_err("INIT_EXPR feature not supported\n"); return -EINVAL; case ICP_QAT_UOF_INIT_EXPR_ENDIAN_SWAP: - pr_err("QAT: INIT_EXPR_ENDIAN_SWAP feature not supported\n"); + pr_err("INIT_EXPR_ENDIAN_SWAP feature not supported\n"); return -EINVAL; default: break; @@ -872,7 +870,7 @@ static int qat_uclo_init_globals(struct icp_qat_fw_loader_handle *handle) return 0; if (obj_handle->init_mem_tab.entry_num) { if (qat_uclo_init_memory(handle)) { - pr_err("QAT: initialize memory failed\n"); + pr_err("initialize memory failed\n"); return -EINVAL; } } @@ -901,40 +899,40 @@ static int qat_hal_set_modes(struct icp_qat_fw_loader_handle *handle, mode = ICP_QAT_CTX_MODE(uof_image->ae_mode); ret = qat_hal_set_ae_ctx_mode(handle, ae, mode); if (ret) { - pr_err("QAT: qat_hal_set_ae_ctx_mode error\n"); + pr_err("qat_hal_set_ae_ctx_mode error\n"); return ret; } if (handle->chip_info->nn) { mode = ICP_QAT_NN_MODE(uof_image->ae_mode); ret = qat_hal_set_ae_nn_mode(handle, ae, mode); if (ret) { - pr_err("QAT: qat_hal_set_ae_nn_mode error\n"); + pr_err("qat_hal_set_ae_nn_mode error\n"); return ret; } } mode = ICP_QAT_LOC_MEM0_MODE(uof_image->ae_mode); ret = qat_hal_set_ae_lm_mode(handle, ae, ICP_LMEM0, mode); if (ret) { - pr_err("QAT: qat_hal_set_ae_lm_mode LMEM0 error\n"); + pr_err("qat_hal_set_ae_lm_mode LMEM0 error\n"); return ret; } mode = ICP_QAT_LOC_MEM1_MODE(uof_image->ae_mode); ret = qat_hal_set_ae_lm_mode(handle, ae, ICP_LMEM1, mode); if (ret) { - pr_err("QAT: qat_hal_set_ae_lm_mode LMEM1 error\n"); + pr_err("qat_hal_set_ae_lm_mode LMEM1 error\n"); return ret; } if (handle->chip_info->lm2lm3) { mode = ICP_QAT_LOC_MEM2_MODE(uof_image->ae_mode); ret = qat_hal_set_ae_lm_mode(handle, ae, ICP_LMEM2, mode); if (ret) { - pr_err("QAT: qat_hal_set_ae_lm_mode LMEM2 error\n"); + pr_err("qat_hal_set_ae_lm_mode LMEM2 error\n"); return ret; } mode = ICP_QAT_LOC_MEM3_MODE(uof_image->ae_mode); ret = qat_hal_set_ae_lm_mode(handle, ae, ICP_LMEM3, mode); if (ret) { - pr_err("QAT: qat_hal_set_ae_lm_mode LMEM3 error\n"); + pr_err("qat_hal_set_ae_lm_mode LMEM3 error\n"); return ret; } mode = ICP_QAT_LOC_TINDEX_MODE(uof_image->ae_mode); @@ -998,7 +996,7 @@ static int qat_uclo_parse_uof_obj(struct icp_qat_fw_loader_handle *handle) obj_handle->prod_rev = PID_MAJOR_REV | (PID_MINOR_REV & handle->hal_handle->revision_id); if (qat_uclo_check_uof_compat(obj_handle)) { - pr_err("QAT: UOF incompatible\n"); + pr_err("UOF incompatible\n"); return -EINVAL; } obj_handle->uword_buf = kcalloc(UWORD_CPYBUF_SIZE, sizeof(u64), @@ -1009,7 +1007,7 @@ static int qat_uclo_parse_uof_obj(struct icp_qat_fw_loader_handle *handle) if (!obj_handle->obj_hdr->file_buff || !qat_uclo_map_str_table(obj_handle->obj_hdr, ICP_QAT_UOF_STRT, &obj_handle->str_table)) { - pr_err("QAT: UOF doesn't have effective images\n"); + pr_err("UOF doesn't have effective images\n"); goto out_err; } obj_handle->uimage_num = @@ -1018,7 +1016,7 @@ static int qat_uclo_parse_uof_obj(struct icp_qat_fw_loader_handle *handle) if (!obj_handle->uimage_num) goto out_err; if (qat_uclo_map_ae(handle, handle->hal_handle->ae_max_num)) { - pr_err("QAT: Bad object\n"); + pr_err("Bad object\n"); goto out_check_uof_aemask_err; } 
qat_uclo_init_uword_num(handle); @@ -1051,7 +1049,7 @@ static int qat_uclo_map_suof_file_hdr(struct icp_qat_fw_loader_handle *handle, check_sum = qat_uclo_calc_str_checksum((char *)&suof_ptr->min_ver, min_ver_offset); if (check_sum != suof_ptr->check_sum) { - pr_err("QAT: incorrect SUOF checksum\n"); + pr_err("incorrect SUOF checksum\n"); return -EINVAL; } suof_handle->check_sum = suof_ptr->check_sum; @@ -1113,14 +1111,13 @@ static int qat_uclo_check_simg_compat(struct icp_qat_fw_loader_handle *handle, prod_rev = PID_MAJOR_REV | (PID_MINOR_REV & handle->hal_handle->revision_id); if (img_ae_mode->dev_type != prod_type) { - pr_err("QAT: incompatible product type %x\n", - img_ae_mode->dev_type); + pr_err("incompatible product type %x\n", img_ae_mode->dev_type); return -EINVAL; } maj_ver = prod_rev & 0xff; if (maj_ver > img_ae_mode->devmax_ver || maj_ver < img_ae_mode->devmin_ver) { - pr_err("QAT: incompatible device majver 0x%x\n", maj_ver); + pr_err("incompatible device majver 0x%x\n", maj_ver); return -EINVAL; } return 0; @@ -1163,7 +1160,7 @@ static int qat_uclo_map_suof(struct icp_qat_fw_loader_handle *handle, struct icp_qat_suof_img_hdr img_header; if (!suof_ptr || suof_size == 0) { - pr_err("QAT: input parameter SUOF pointer/size is NULL\n"); + pr_err("input parameter SUOF pointer/size is NULL\n"); return -EINVAL; } if (qat_uclo_check_suof_format(suof_ptr)) @@ -1237,7 +1234,7 @@ static int qat_uclo_auth_fw(struct icp_qat_fw_loader_handle *handle, return 0; } while (retry++ < FW_AUTH_MAX_RETRY); auth_fail: - pr_err("QAT: authentication error (FCU_STATUS = 0x%x),retry = %d\n", + pr_err("authentication error (FCU_STATUS = 0x%x),retry = %d\n", fcu_sts & FCU_AUTH_STS_MASK, retry); return -EINVAL; } @@ -1273,14 +1270,13 @@ static int qat_uclo_broadcast_load_fw(struct icp_qat_fw_loader_handle *handle, fcu_sts_csr = handle->chip_info->fcu_sts_csr; fcu_loaded_csr = handle->chip_info->fcu_loaded_ae_csr; } else { - pr_err("Chip 0x%x doesn't support broadcast load\n", - handle->pci_dev->device); + pr_err("Chip 0x%x doesn't support broadcast load\n", handle->pci_dev->device); return -EINVAL; } for_each_set_bit(ae, &ae_mask, handle->hal_handle->ae_max_num) { if (qat_hal_check_ae_active(handle, (unsigned char)ae)) { - pr_err("QAT: Broadcast load failed. AE is not enabled or active.\n"); + pr_err("Broadcast load failed. 
AE is not enabled or active.\n"); return -EINVAL; } @@ -1312,7 +1308,7 @@ static int qat_uclo_broadcast_load_fw(struct icp_qat_fw_loader_handle *handle, } while (retry++ < FW_AUTH_MAX_RETRY); if (retry > FW_AUTH_MAX_RETRY) { - pr_err("QAT: broadcast load failed timeout %d\n", retry); + pr_err("broadcast load failed timeout %d\n", retry); return -EINVAL; } } @@ -1397,13 +1393,13 @@ static int qat_uclo_check_image(struct icp_qat_fw_loader_handle *handle, if (size > ICP_QAT_CSS_RSA3K_MAX_IMAGE_LEN) goto err; } else { - pr_err("QAT: Unsupported firmware type\n"); + pr_err("Unsupported firmware type\n"); return -EINVAL; } return 0; err: - pr_err("QAT: Invalid %s firmware image\n", fw_type_name); + pr_err("Invalid %s firmware image\n", fw_type_name); return -EINVAL; } @@ -1422,12 +1418,12 @@ static int qat_uclo_map_auth_fw(struct icp_qat_fw_loader_handle *handle, ret = qat_uclo_simg_alloc(handle, &img_desc, ICP_QAT_CSS_RSA4K_MAX_IMAGE_LEN); if (ret) { - pr_err("QAT: error, allocate continuous dram fail\n"); + pr_err("error, allocate continuous dram fail\n"); return ret; } if (!IS_ALIGNED(img_desc.dram_size, 8) || !img_desc.dram_bus_addr) { - pr_debug("QAT: invalid address\n"); + pr_debug("invalid address\n"); qat_uclo_simg_free(handle, &img_desc); return -EINVAL; } @@ -1489,7 +1485,7 @@ static int qat_uclo_map_auth_fw(struct icp_qat_fw_loader_handle *handle, auth_desc->img_len = size - ICP_QAT_AE_IMG_OFFSET(handle); if (bus_addr + auth_desc->img_len > img_desc.dram_bus_addr + ICP_QAT_CSS_RSA4K_MAX_IMAGE_LEN) { - pr_err("QAT: insufficient memory size for authentication data\n"); + pr_err("insufficient memory size for authentication data\n"); qat_uclo_simg_free(handle, &img_desc); return -ENOMEM; } @@ -1546,7 +1542,7 @@ static int qat_uclo_load_fw(struct icp_qat_fw_loader_handle *handle, if (!((desc->ae_mask >> i) & 0x1)) continue; if (qat_hal_check_ae_active(handle, i)) { - pr_err("QAT: AE %d is active\n", i); + pr_err("AE %d is active\n", i); return -EINVAL; } SET_CAP_CSR(handle, fcu_ctl_csr, @@ -1566,7 +1562,7 @@ static int qat_uclo_load_fw(struct icp_qat_fw_loader_handle *handle, } } while (retry++ < FW_AUTH_MAX_RETRY); if (retry > FW_AUTH_MAX_RETRY) { - pr_err("QAT: firmware load failed timeout %x\n", retry); + pr_err("firmware load failed timeout %x\n", retry); return -EINVAL; } } @@ -1584,7 +1580,7 @@ static int qat_uclo_map_suof_obj(struct icp_qat_fw_loader_handle *handle, handle->sobj_handle = suof_handle; if (qat_uclo_map_suof(handle, addr_ptr, mem_size)) { qat_uclo_del_suof(handle); - pr_err("QAT: map SUOF failed\n"); + pr_err("map SUOF failed\n"); return -EINVAL; } return 0; @@ -1608,7 +1604,7 @@ int qat_uclo_wr_mimage(struct icp_qat_fw_loader_handle *handle, qat_uclo_ummap_auth_fw(handle, &desc); } else { if (handle->chip_info->mmp_sram_size < mem_size) { - pr_err("QAT: MMP size is too large: 0x%x\n", mem_size); + pr_err("MMP size is too large: 0x%x\n", mem_size); return -EFBIG; } qat_uclo_wr_sram_by_words(handle, 0, addr_ptr, mem_size); @@ -1634,7 +1630,7 @@ static int qat_uclo_map_uof_obj(struct icp_qat_fw_loader_handle *handle, objhdl->obj_hdr = qat_uclo_map_chunk((char *)objhdl->obj_buf, filehdr, ICP_QAT_UOF_OBJS); if (!objhdl->obj_hdr) { - pr_err("QAT: object file chunk is null\n"); + pr_err("object file chunk is null\n"); goto out_objhdr_err; } handle->obj_handle = objhdl; @@ -1669,7 +1665,7 @@ static int qat_uclo_map_mof_file_hdr(struct icp_qat_fw_loader_handle *handle, checksum = qat_uclo_calc_str_checksum(&mof_ptr->min_ver, min_ver_offset); if (checksum != mof_ptr->checksum) 
{ - pr_err("QAT: incorrect MOF checksum\n"); + pr_err("incorrect MOF checksum\n"); return -EINVAL; } @@ -1705,7 +1701,7 @@ static int qat_uclo_seek_obj_inside_mof(struct icp_qat_mof_handle *mobj_handle, } } - pr_err("QAT: object %s is not found inside MOF\n", obj_name); + pr_err("object %s is not found inside MOF\n", obj_name); return -EINVAL; } @@ -1722,7 +1718,7 @@ static int qat_uclo_map_obj_from_mof(struct icp_qat_mof_handle *mobj_handle, ICP_QAT_MOF_OBJ_CHUNKID_LEN)) { obj = mobj_handle->sobjs_hdr + obj_chunkhdr->offset; } else { - pr_err("QAT: unsupported chunk id\n"); + pr_err("unsupported chunk id\n"); return -EINVAL; } mobj_hdr->obj_buf = obj; @@ -1783,7 +1779,7 @@ static int qat_uclo_map_objs_from_mof(struct icp_qat_mof_handle *mobj_handle) } if ((uobj_chunk_num + sobj_chunk_num) != *valid_chunk) { - pr_err("QAT: inconsistent UOF/SUOF chunk amount\n"); + pr_err("inconsistent UOF/SUOF chunk amount\n"); return -EINVAL; } return 0; @@ -1824,17 +1820,16 @@ static int qat_uclo_check_mof_format(struct icp_qat_mof_file_hdr *mof_hdr) int min = mof_hdr->min_ver & 0xff; if (mof_hdr->file_id != ICP_QAT_MOF_FID) { - pr_err("QAT: invalid header 0x%x\n", mof_hdr->file_id); + pr_err("invalid header 0x%x\n", mof_hdr->file_id); return -EINVAL; } if (mof_hdr->num_chunks <= 0x1) { - pr_err("QAT: MOF chunk amount is incorrect\n"); + pr_err("MOF chunk amount is incorrect\n"); return -EINVAL; } if (maj != ICP_QAT_MOF_MAJVER || min != ICP_QAT_MOF_MINVER) { - pr_err("QAT: bad MOF version, major 0x%x, minor 0x%x\n", - maj, min); + pr_err("bad MOF version, major 0x%x, minor 0x%x\n", maj, min); return -EINVAL; } return 0; From patchwork Wed Apr 30 11:34:46 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Suman Kumar Chakraborty X-Patchwork-Id: 886179 Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 87CF925A630 for ; Wed, 30 Apr 2025 11:35:10 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.15 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1746012912; cv=none; b=jwJhwzgpJaBGIcsll6RfS5GPCkhFSuyIFyRXgh7q3BLPyx246RYiapcsaJ2pTJLmfgbbfFueiw6JT63klj7Hn/MtZcy1kh2pLjwVuqcjjVbLZkIhRF1xveCchGVTwTP3OhRit3uN3tf9ctFCqrmDHHy2JdayAyspaPMMCgDt734= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1746012912; c=relaxed/simple; bh=giGlaQyaAsE/jquEcnICW2V7Ki+PiVnr6kJifTwfMWk=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=TaMUd+DtX5O1VJNMUOldZVk8wbnsp4WLr7fyG4DIxWmbLz1wmsMQOKmubhQb8z5JAj/pTyjszrBaNumDBf6Sq2x2QB9zGoWR6aYXn6bfutFgVqucx9V1tTDpGTy/MduaxR1nEVfqMYyk7v89fAA77XBDdJy+70odQphnmk4gLFw= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=hGJi7rfI; arc=none smtp.client-ip=198.175.65.15 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="hGJi7rfI" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; 
q=dns/txt; s=Intel; t=1746012910; x=1777548910; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=giGlaQyaAsE/jquEcnICW2V7Ki+PiVnr6kJifTwfMWk=; b=hGJi7rfIjnlqv2WCrEtuaIANvpeiSoNOCO4ib4jCrFFNCUqJr8sefEbm B6vhDLMQGa4+/FK+SK4r7EdBsU24MFbSNJRH8fiVhy/BErZ28dK8WQ3sZ Eyt6i4yi7h/qimanV0kRlgAn3ODRnzNroSaahbkd0OGHwMy79Bn9DHUvv hSMsGE1WQY1ET86vxDXZ5Gkp1HzUrVKPjT5WM947IQdcO7DDd5iOWTRwf 7e5PED9vQq+p3P5yYL96o7UtznMeoNcKtflqaCwzdkokFwnTisbp0gqKN nItzXf194UKx0suF+K8p9kd8ke3E8BieRoIQCy5DJ3QkGkTaWvrJiB33/ Q==; X-CSE-ConnectionGUID: +MMP1KKhSSqP8UUF+uGB5A== X-CSE-MsgGUID: EkAZObzERa2XArJUUmLwyA== X-IronPort-AV: E=McAfee;i="6700,10204,11418"; a="51331144" X-IronPort-AV: E=Sophos;i="6.15,251,1739865600"; d="scan'208";a="51331144" Received: from orviesa009.jf.intel.com ([10.64.159.149]) by orvoesa107.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 30 Apr 2025 04:35:10 -0700 X-CSE-ConnectionGUID: +91lUjGMQq+t+Aywo/M03g== X-CSE-MsgGUID: 24OgB6TeSBqPICbA0/mWyw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.15,251,1739865600"; d="scan'208";a="133812532" Received: from t21-qat.iind.intel.com ([10.49.15.35]) by orviesa009.jf.intel.com with ESMTP; 30 Apr 2025 04:35:08 -0700 From: Suman Kumar Chakraborty To: herbert@gondor.apana.org.au Cc: linux-crypto@vger.kernel.org, qat-linux@intel.com Subject: [PATCH 04/11] crypto: qat - refactor FW signing algorithm Date: Wed, 30 Apr 2025 12:34:46 +0100 Message-Id: <20250430113453.1587497-5-suman.kumar.chakraborty@intel.com> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20250430113453.1587497-1-suman.kumar.chakraborty@intel.com> References: <20250430113453.1587497-1-suman.kumar.chakraborty@intel.com> Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Jack Xu The current implementation is designed to support single FW signing authentication only. Refactor the implementation to support other FW signing methods. This does not include any functional change. 
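A minimal sketch of the structure this refactor enables, shown only for illustration: qat_uclo_map_auth_fw() is reduced to allocating the shared DRAM auth chunk and delegating descriptor construction to a signing-specific builder, so a further scheme only needs one more builder plus a selection check. The "other" builder name and the chip_info flag below are placeholders, not part of this patch; the dual-sign support added later in this series follows the same shape.

	ret = qat_uclo_simg_alloc(handle, &img_desc, ICP_QAT_CSS_RSA4K_MAX_IMAGE_LEN);
	if (ret)
		return ret;

	simg_fw_type = qat_uclo_simg_fw_type(handle, image);
	auth_chunk = img_desc.dram_base_addr_v;
	auth_chunk->chunk_size = img_desc.dram_size;
	auth_chunk->chunk_bus_addr = img_desc.dram_bus_addr;

	/* Placeholder flag and builder for a hypothetical second signing scheme */
	if (handle->chip_info->other_sign_scheme)
		return qat_uclo_build_auth_desc_other(handle, image, size,
						      &img_desc, simg_fw_type, desc);

	return qat_uclo_build_auth_desc_RSA(handle, image, size, &img_desc,
					    simg_fw_type, desc);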
Co-developed-by: Suman Kumar Chakraborty Signed-off-by: Suman Kumar Chakraborty Signed-off-by: Jack Xu Reviewed-by: Giovanni Cabiddu --- .../crypto/intel/qat/qat_common/qat_uclo.c | 154 ++++++++++-------- 1 file changed, 84 insertions(+), 70 deletions(-) diff --git a/drivers/crypto/intel/qat/qat_common/qat_uclo.c b/drivers/crypto/intel/qat/qat_common/qat_uclo.c index 9dc26f5d2d01..d7f2ceb81f1f 100644 --- a/drivers/crypto/intel/qat/qat_common/qat_uclo.c +++ b/drivers/crypto/intel/qat/qat_common/qat_uclo.c @@ -1033,6 +1033,23 @@ static int qat_uclo_parse_uof_obj(struct icp_qat_fw_loader_handle *handle) return -EFAULT; } +static unsigned int qat_uclo_simg_hdr2sign_len(struct icp_qat_fw_loader_handle *handle) +{ + return ICP_QAT_AE_IMG_OFFSET(handle); +} + +static unsigned int qat_uclo_simg_hdr2cont_len(struct icp_qat_fw_loader_handle *handle) +{ + return ICP_QAT_AE_IMG_OFFSET(handle); +} + +static unsigned int qat_uclo_simg_fw_type(struct icp_qat_fw_loader_handle *handle, void *img_ptr) +{ + struct icp_qat_css_hdr *hdr = img_ptr; + + return hdr->fw_type; +} + static int qat_uclo_map_suof_file_hdr(struct icp_qat_fw_loader_handle *handle, struct icp_qat_suof_filehdr *suof_ptr, int suof_size) @@ -1064,9 +1081,9 @@ static void qat_uclo_map_simg(struct icp_qat_fw_loader_handle *handle, struct icp_qat_suof_chunk_hdr *suof_chunk_hdr) { struct icp_qat_suof_handle *suof_handle = handle->sobj_handle; - unsigned int offset = ICP_QAT_AE_IMG_OFFSET(handle); - struct icp_qat_simg_ae_mode *ae_mode; + unsigned int offset = qat_uclo_simg_hdr2cont_len(handle); struct icp_qat_suof_objhdr *suof_objhdr; + struct icp_qat_simg_ae_mode *ae_mode; suof_img_hdr->simg_buf = (suof_handle->suof_buf + suof_chunk_hdr->offset + @@ -1362,21 +1379,24 @@ static void qat_uclo_ummap_auth_fw(struct icp_qat_fw_loader_handle *handle, } static int qat_uclo_check_image(struct icp_qat_fw_loader_handle *handle, - char *image, unsigned int size, + void *image, unsigned int size, unsigned int fw_type) { char *fw_type_name = fw_type ? 
"MMP" : "AE"; unsigned int css_dword_size = sizeof(u32); + unsigned int header_len, simg_type; + struct icp_qat_css_hdr *css_hdr; if (handle->chip_info->fw_auth) { - struct icp_qat_css_hdr *css_hdr = (struct icp_qat_css_hdr *)image; - unsigned int header_len = ICP_QAT_AE_IMG_OFFSET(handle); + header_len = qat_uclo_simg_hdr2sign_len(handle); + simg_type = qat_uclo_simg_fw_type(handle, image); + css_hdr = image; if ((css_hdr->header_len * css_dword_size) != header_len) goto err; if ((css_hdr->size * css_dword_size) != size) goto err; - if (fw_type != css_hdr->fw_type) + if (fw_type != simg_type) goto err; if (size <= header_len) goto err; @@ -1403,113 +1423,85 @@ static int qat_uclo_check_image(struct icp_qat_fw_loader_handle *handle, return -EINVAL; } -static int qat_uclo_map_auth_fw(struct icp_qat_fw_loader_handle *handle, - char *image, unsigned int size, - struct icp_qat_fw_auth_desc **desc) +static int qat_uclo_build_auth_desc_RSA(struct icp_qat_fw_loader_handle *handle, + char *image, unsigned int size, + struct icp_firml_dram_desc *dram_desc, + unsigned int fw_type, struct icp_qat_fw_auth_desc **desc) { struct icp_qat_css_hdr *css_hdr = (struct icp_qat_css_hdr *)image; - struct icp_qat_fw_auth_desc *auth_desc; - struct icp_qat_auth_chunk *auth_chunk; - u64 virt_addr, bus_addr, virt_base; - unsigned int simg_offset = sizeof(*auth_chunk); struct icp_qat_simg_ae_mode *simg_ae_mode; - struct icp_firml_dram_desc img_desc; - int ret; + struct icp_qat_fw_auth_desc *auth_desc; + char *virt_addr, *virt_base; + u64 bus_addr; - ret = qat_uclo_simg_alloc(handle, &img_desc, ICP_QAT_CSS_RSA4K_MAX_IMAGE_LEN); - if (ret) { - pr_err("error, allocate continuous dram fail\n"); - return ret; - } - - if (!IS_ALIGNED(img_desc.dram_size, 8) || !img_desc.dram_bus_addr) { - pr_debug("invalid address\n"); - qat_uclo_simg_free(handle, &img_desc); - return -EINVAL; - } - - auth_chunk = img_desc.dram_base_addr_v; - auth_chunk->chunk_size = img_desc.dram_size; - auth_chunk->chunk_bus_addr = img_desc.dram_bus_addr; - virt_base = (uintptr_t)img_desc.dram_base_addr_v + simg_offset; - bus_addr = img_desc.dram_bus_addr + simg_offset; - auth_desc = img_desc.dram_base_addr_v; - auth_desc->css_hdr_high = (unsigned int)(bus_addr >> BITS_PER_TYPE(u32)); - auth_desc->css_hdr_low = (unsigned int)bus_addr; + virt_base = dram_desc->dram_base_addr_v; + virt_base += sizeof(struct icp_qat_auth_chunk); + bus_addr = dram_desc->dram_bus_addr + sizeof(struct icp_qat_auth_chunk); + auth_desc = dram_desc->dram_base_addr_v; + auth_desc->css_hdr_high = upper_32_bits(bus_addr); + auth_desc->css_hdr_low = lower_32_bits(bus_addr); virt_addr = virt_base; - memcpy((void *)(uintptr_t)virt_addr, image, sizeof(*css_hdr)); + memcpy(virt_addr, image, sizeof(*css_hdr)); /* pub key */ bus_addr = ADD_ADDR(auth_desc->css_hdr_high, auth_desc->css_hdr_low) + sizeof(*css_hdr); virt_addr = virt_addr + sizeof(*css_hdr); - auth_desc->fwsk_pub_high = (unsigned int)(bus_addr >> BITS_PER_TYPE(u32)); - auth_desc->fwsk_pub_low = (unsigned int)bus_addr; + auth_desc->fwsk_pub_high = upper_32_bits(bus_addr); + auth_desc->fwsk_pub_low = lower_32_bits(bus_addr); - memcpy((void *)(uintptr_t)virt_addr, - (void *)(image + sizeof(*css_hdr)), - ICP_QAT_CSS_FWSK_MODULUS_LEN(handle)); + memcpy(virt_addr, image + sizeof(*css_hdr), ICP_QAT_CSS_FWSK_MODULUS_LEN(handle)); /* padding */ memset((void *)(uintptr_t)(virt_addr + ICP_QAT_CSS_FWSK_MODULUS_LEN(handle)), 0, ICP_QAT_CSS_FWSK_PAD_LEN(handle)); /* exponent */ - memcpy((void *)(uintptr_t)(virt_addr + 
ICP_QAT_CSS_FWSK_MODULUS_LEN(handle) + - ICP_QAT_CSS_FWSK_PAD_LEN(handle)), - (void *)(image + sizeof(*css_hdr) + - ICP_QAT_CSS_FWSK_MODULUS_LEN(handle)), - sizeof(unsigned int)); + memcpy(virt_addr + ICP_QAT_CSS_FWSK_MODULUS_LEN(handle) + + ICP_QAT_CSS_FWSK_PAD_LEN(handle), image + sizeof(*css_hdr) + + ICP_QAT_CSS_FWSK_MODULUS_LEN(handle), sizeof(unsigned int)); /* signature */ bus_addr = ADD_ADDR(auth_desc->fwsk_pub_high, auth_desc->fwsk_pub_low) + ICP_QAT_CSS_FWSK_PUB_LEN(handle); virt_addr = virt_addr + ICP_QAT_CSS_FWSK_PUB_LEN(handle); - auth_desc->signature_high = (unsigned int)(bus_addr >> BITS_PER_TYPE(u32)); - auth_desc->signature_low = (unsigned int)bus_addr; + auth_desc->signature_high = upper_32_bits(bus_addr); + auth_desc->signature_low = lower_32_bits(bus_addr); - memcpy((void *)(uintptr_t)virt_addr, - (void *)(image + sizeof(*css_hdr) + - ICP_QAT_CSS_FWSK_MODULUS_LEN(handle) + - ICP_QAT_CSS_FWSK_EXPONENT_LEN(handle)), - ICP_QAT_CSS_SIGNATURE_LEN(handle)); + memcpy(virt_addr, image + sizeof(*css_hdr) + ICP_QAT_CSS_FWSK_MODULUS_LEN(handle) + + ICP_QAT_CSS_FWSK_EXPONENT_LEN(handle), ICP_QAT_CSS_SIGNATURE_LEN(handle)); bus_addr = ADD_ADDR(auth_desc->signature_high, auth_desc->signature_low) + ICP_QAT_CSS_SIGNATURE_LEN(handle); virt_addr += ICP_QAT_CSS_SIGNATURE_LEN(handle); - auth_desc->img_high = (unsigned int)(bus_addr >> BITS_PER_TYPE(u32)); - auth_desc->img_low = (unsigned int)bus_addr; - auth_desc->img_len = size - ICP_QAT_AE_IMG_OFFSET(handle); - if (bus_addr + auth_desc->img_len > img_desc.dram_bus_addr + - ICP_QAT_CSS_RSA4K_MAX_IMAGE_LEN) { + auth_desc->img_high = upper_32_bits(bus_addr); + auth_desc->img_low = lower_32_bits(bus_addr); + auth_desc->img_len = size - qat_uclo_simg_hdr2sign_len(handle); + if (bus_addr + auth_desc->img_len > + dram_desc->dram_bus_addr + ICP_QAT_CSS_RSA4K_MAX_IMAGE_LEN) { pr_err("insufficient memory size for authentication data\n"); - qat_uclo_simg_free(handle, &img_desc); + qat_uclo_simg_free(handle, dram_desc); return -ENOMEM; } - memcpy((void *)(uintptr_t)virt_addr, - (void *)(image + ICP_QAT_AE_IMG_OFFSET(handle)), - auth_desc->img_len); + memcpy(virt_addr, image + qat_uclo_simg_hdr2sign_len(handle), auth_desc->img_len); virt_addr = virt_base; /* AE firmware */ - if (((struct icp_qat_css_hdr *)(uintptr_t)virt_addr)->fw_type == - CSS_AE_FIRMWARE) { + if (fw_type == CSS_AE_FIRMWARE) { auth_desc->img_ae_mode_data_high = auth_desc->img_high; auth_desc->img_ae_mode_data_low = auth_desc->img_low; bus_addr = ADD_ADDR(auth_desc->img_ae_mode_data_high, auth_desc->img_ae_mode_data_low) + sizeof(struct icp_qat_simg_ae_mode); - auth_desc->img_ae_init_data_high = - (unsigned int)(bus_addr >> BITS_PER_TYPE(u32)); - auth_desc->img_ae_init_data_low = (unsigned int)bus_addr; + auth_desc->img_ae_init_data_high = upper_32_bits(bus_addr); + auth_desc->img_ae_init_data_low = lower_32_bits(bus_addr); bus_addr += ICP_QAT_SIMG_AE_INIT_SEQ_LEN; - auth_desc->img_ae_insts_high = - (unsigned int)(bus_addr >> BITS_PER_TYPE(u32)); - auth_desc->img_ae_insts_low = (unsigned int)bus_addr; + auth_desc->img_ae_insts_high = upper_32_bits(bus_addr); + auth_desc->img_ae_insts_low = lower_32_bits(bus_addr); virt_addr += sizeof(struct icp_qat_css_hdr); virt_addr += ICP_QAT_CSS_FWSK_PUB_LEN(handle); virt_addr += ICP_QAT_CSS_SIGNATURE_LEN(handle); @@ -1523,6 +1515,28 @@ static int qat_uclo_map_auth_fw(struct icp_qat_fw_loader_handle *handle, return 0; } +static int qat_uclo_map_auth_fw(struct icp_qat_fw_loader_handle *handle, + char *image, unsigned int size, + struct 
icp_qat_fw_auth_desc **desc) +{ + struct icp_qat_auth_chunk *auth_chunk; + struct icp_firml_dram_desc img_desc; + unsigned int simg_fw_type; + int ret; + + ret = qat_uclo_simg_alloc(handle, &img_desc, ICP_QAT_CSS_RSA4K_MAX_IMAGE_LEN); + if (ret) + return ret; + + simg_fw_type = qat_uclo_simg_fw_type(handle, image); + auth_chunk = img_desc.dram_base_addr_v; + auth_chunk->chunk_size = img_desc.dram_size; + auth_chunk->chunk_bus_addr = img_desc.dram_bus_addr; + + return qat_uclo_build_auth_desc_RSA(handle, image, size, &img_desc, + simg_fw_type, desc); +} + static int qat_uclo_load_fw(struct icp_qat_fw_loader_handle *handle, struct icp_qat_fw_auth_desc *desc) { From patchwork Wed Apr 30 11:34:47 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Suman Kumar Chakraborty X-Patchwork-Id: 886557 Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 8EF7825B1D2 for ; Wed, 30 Apr 2025 11:35:12 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.15 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1746012914; cv=none; b=W4Mg2gSKPTzk/hiRqmveNFoIakOYEclgRTOigKL6ZxnPcSl6W48OSwLzgl/dgl4G8j8pKdaOR1LP5u13BsomO3a1H5ZXgdrSj++VXc51EA8cZ8tf8ounkVykglvEoo5O3S+kutDeQ1nDS3cYQYv+etN2sqvFmioM7qN++8iers4= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1746012914; c=relaxed/simple; bh=6ndKKDy0odFX5NtVdYKV6Lab+zc5aNXaVMracC0hGM4=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=H6K4iKKPqBY474fRLfUGH8EbnlYUFNhTP9XUeCzlY0VJRsJBrx0XNWsPof7oU05QZGS0k8Y1h2XH+dFy8F4gdeCyY7iJ67IehhV9VWkLaPVGnCJzdYLEaug2JkxK3dvD0/YC0YT8chGXEg6Rg/FK8YXBZfMu4hhZ7Ze/pV6OSCU= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=RlQ/XxdI; arc=none smtp.client-ip=198.175.65.15 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="RlQ/XxdI" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1746012913; x=1777548913; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=6ndKKDy0odFX5NtVdYKV6Lab+zc5aNXaVMracC0hGM4=; b=RlQ/XxdIW3caNod0D3GEcoJhH9GErmXnfhz9M65sT1FljNeuHVbrTFsM jUQ5AK2XDCyJLxFFdN56R9AwiLvZeVlf/hNojFVrSkM2Dw8aJYxn9auJl NGsJyo0HyvRsDTI3Mcc3/HtXhxTtBgoQPTpOxcrc5xblPhOofK7U2Az8o 8JdGjBMyw4boEhY+OHGf6fcCWfRcV85XqwpY7ASvRrgqY5B45SIyRB/Dy xIQFXjBiPFCfJ7APIWuW1iABsfNEt1wSztnjKJ8H0S3vwcbd3oBfNQU1t yMd1jgIHv1RUC09Yp1WDxJLhW/xIH5VphIUMvWNNPbgrzwAeW4XDd+WV/ w==; X-CSE-ConnectionGUID: Kopseo8ZR+26PSR8GW7W4g== X-CSE-MsgGUID: 2sgIERTCQOmSt7qUJ9oXhQ== X-IronPort-AV: E=McAfee;i="6700,10204,11418"; a="51331150" X-IronPort-AV: E=Sophos;i="6.15,251,1739865600"; d="scan'208";a="51331150" Received: from orviesa009.jf.intel.com ([10.64.159.149]) by orvoesa107.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 30 Apr 2025 04:35:12 -0700 X-CSE-ConnectionGUID: 
QOfm1KweS1CW5Ta838kZuA== X-CSE-MsgGUID: dq2cEDvBQNapL52IEXjO6w== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.15,251,1739865600"; d="scan'208";a="133812536" Received: from t21-qat.iind.intel.com ([10.49.15.35]) by orviesa009.jf.intel.com with ESMTP; 30 Apr 2025 04:35:10 -0700 From: Suman Kumar Chakraborty To: herbert@gondor.apana.org.au Cc: linux-crypto@vger.kernel.org, qat-linux@intel.com Subject: [PATCH 05/11] crypto: qat - add GEN6 firmware loader Date: Wed, 30 Apr 2025 12:34:47 +0100 Message-Id: <20250430113453.1587497-6-suman.kumar.chakraborty@intel.com> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20250430113453.1587497-1-suman.kumar.chakraborty@intel.com> References: <20250430113453.1587497-1-suman.kumar.chakraborty@intel.com> Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Jack Xu Add support for the QAT GEN6 devices in the firmware loader. This includes handling firmware images signed with the RSA 3K and the XMSS algorithms. Co-developed-by: Suman Kumar Chakraborty Signed-off-by: Suman Kumar Chakraborty Signed-off-by: Jack Xu --- .../intel/qat/qat_common/adf_accel_devices.h | 2 + .../qat/qat_common/icp_qat_fw_loader_handle.h | 1 + .../intel/qat/qat_common/icp_qat_uclo.h | 23 +++ drivers/crypto/intel/qat/qat_common/qat_hal.c | 3 + .../crypto/intel/qat/qat_common/qat_uclo.c | 154 +++++++++++++++++- 5 files changed, 176 insertions(+), 7 deletions(-) diff --git a/drivers/crypto/intel/qat/qat_common/adf_accel_devices.h b/drivers/crypto/intel/qat/qat_common/adf_accel_devices.h index a39f506322f6..ed8b85360573 100644 --- a/drivers/crypto/intel/qat/qat_common/adf_accel_devices.h +++ b/drivers/crypto/intel/qat/qat_common/adf_accel_devices.h @@ -34,6 +34,8 @@ #define PCI_DEVICE_ID_INTEL_QAT_402XXIOV 0x4945 #define PCI_DEVICE_ID_INTEL_QAT_420XX 0x4946 #define PCI_DEVICE_ID_INTEL_QAT_420XXIOV 0x4947 +#define PCI_DEVICE_ID_INTEL_QAT_6XXX 0x4948 + #define ADF_DEVICE_FUSECTL_OFFSET 0x40 #define ADF_DEVICE_LEGFUSE_OFFSET 0x4C #define ADF_DEVICE_FUSECTL_MASK 0x80000000 diff --git a/drivers/crypto/intel/qat/qat_common/icp_qat_fw_loader_handle.h b/drivers/crypto/intel/qat/qat_common/icp_qat_fw_loader_handle.h index 7eb5daef4f88..6887930c7995 100644 --- a/drivers/crypto/intel/qat/qat_common/icp_qat_fw_loader_handle.h +++ b/drivers/crypto/intel/qat/qat_common/icp_qat_fw_loader_handle.h @@ -35,6 +35,7 @@ struct icp_qat_fw_loader_chip_info { u32 wakeup_event_val; bool fw_auth; bool css_3k; + bool dual_sign; bool tgroup_share_ustore; u32 fcu_ctl_csr; u32 fcu_sts_csr; diff --git a/drivers/crypto/intel/qat/qat_common/icp_qat_uclo.h b/drivers/crypto/intel/qat/qat_common/icp_qat_uclo.h index 1c7bcd8e4055..6313c35eff0c 100644 --- a/drivers/crypto/intel/qat/qat_common/icp_qat_uclo.h +++ b/drivers/crypto/intel/qat/qat_common/icp_qat_uclo.h @@ -7,6 +7,7 @@ #define ICP_QAT_AC_C62X_DEV_TYPE 0x01000000 #define ICP_QAT_AC_C3XXX_DEV_TYPE 0x02000000 #define ICP_QAT_AC_4XXX_A_DEV_TYPE 0x08000000 +#define ICP_QAT_AC_6XXX_DEV_TYPE 0x80000000 #define ICP_QAT_UCLO_MAX_AE 17 #define ICP_QAT_UCLO_MAX_CTX 8 #define ICP_QAT_UCLO_MAX_UIMAGE (ICP_QAT_UCLO_MAX_AE * ICP_QAT_UCLO_MAX_CTX) @@ -81,6 +82,21 @@ #define ICP_QAT_CSS_RSA4K_MAX_IMAGE_LEN 0x40000 #define ICP_QAT_CSS_RSA3K_MAX_IMAGE_LEN 0x30000 +/* All lengths below are in bytes */ +#define ICP_QAT_DUALSIGN_OPAQUE_HDR_LEN 12 +#define ICP_QAT_DUALSIGN_OPAQUE_HDR_ALIGN_LEN 16 +#define ICP_QAT_DUALSIGN_OPAQUE_DATA_LEN 3540 +#define ICP_QAT_DUALSIGN_XMSS_PUBKEY_LEN 64 +#define 
ICP_QAT_DUALSIGN_XMSS_SIG_LEN 2692 +#define ICP_QAT_DUALSIGN_XMSS_SIG_ALIGN_LEN 2696 +#define ICP_QAT_DUALSIGN_MISC_INFO_LEN 16 +#define ICP_QAT_DUALSIGN_FW_TYPE_LEN 7 +#define ICP_QAT_DUALSIGN_MODULE_TYPE 0x14 +#define ICP_QAT_DUALSIGN_HDR_LEN 0x375 +#define ICP_QAT_DUALSIGN_HDR_VER 0x40001 +#define ICP_QAT_DUALSIGN_HDR_LEN_OFFSET 4 +#define ICP_QAT_DUALSIGN_HDR_VER_OFFSET 8 + #define ICP_QAT_CTX_MODE(ae_mode) ((ae_mode) & 0xf) #define ICP_QAT_NN_MODE(ae_mode) (((ae_mode) >> 0x4) & 0xf) #define ICP_QAT_SHARED_USTORE_MODE(ae_mode) (((ae_mode) >> 0xb) & 0x1) @@ -440,6 +456,13 @@ struct icp_qat_fw_auth_desc { unsigned int img_ae_init_data_low; unsigned int img_ae_insts_high; unsigned int img_ae_insts_low; + unsigned int cpp_mask; + unsigned int reserved; + unsigned int xmss_pubkey_high; + unsigned int xmss_pubkey_low; + unsigned int xmss_sig_high; + unsigned int xmss_sig_low; + unsigned int reserved2[2]; }; struct icp_qat_auth_chunk { diff --git a/drivers/crypto/intel/qat/qat_common/qat_hal.c b/drivers/crypto/intel/qat/qat_common/qat_hal.c index 841c1d7d3ffe..da4eca6e1633 100644 --- a/drivers/crypto/intel/qat/qat_common/qat_hal.c +++ b/drivers/crypto/intel/qat/qat_common/qat_hal.c @@ -698,6 +698,7 @@ static int qat_hal_chip_init(struct icp_qat_fw_loader_handle *handle, case PCI_DEVICE_ID_INTEL_QAT_401XX: case PCI_DEVICE_ID_INTEL_QAT_402XX: case PCI_DEVICE_ID_INTEL_QAT_420XX: + case PCI_DEVICE_ID_INTEL_QAT_6XXX: handle->chip_info->mmp_sram_size = 0; handle->chip_info->nn = false; handle->chip_info->lm2lm3 = true; @@ -712,6 +713,8 @@ static int qat_hal_chip_init(struct icp_qat_fw_loader_handle *handle, handle->chip_info->wakeup_event_val = 0x80000000; handle->chip_info->fw_auth = true; handle->chip_info->css_3k = true; + if (handle->pci_dev->device == PCI_DEVICE_ID_INTEL_QAT_6XXX) + handle->chip_info->dual_sign = true; handle->chip_info->tgroup_share_ustore = true; handle->chip_info->fcu_ctl_csr = FCU_CONTROL_4XXX; handle->chip_info->fcu_sts_csr = FCU_STATUS_4XXX; diff --git a/drivers/crypto/intel/qat/qat_common/qat_uclo.c b/drivers/crypto/intel/qat/qat_common/qat_uclo.c index d7f2ceb81f1f..21d652a1c8ef 100644 --- a/drivers/crypto/intel/qat/qat_common/qat_uclo.c +++ b/drivers/crypto/intel/qat/qat_common/qat_uclo.c @@ -10,6 +10,7 @@ #include #include #include +#include #include "adf_accel_devices.h" #include "adf_common_drv.h" #include "icp_qat_uclo.h" @@ -737,6 +738,8 @@ qat_uclo_get_dev_type(struct icp_qat_fw_loader_handle *handle) case PCI_DEVICE_ID_INTEL_QAT_402XX: case PCI_DEVICE_ID_INTEL_QAT_420XX: return ICP_QAT_AC_4XXX_A_DEV_TYPE; + case PCI_DEVICE_ID_INTEL_QAT_6XXX: + return ICP_QAT_AC_6XXX_DEV_TYPE; default: pr_err("unsupported device 0x%x\n", handle->pci_dev->device); return 0; @@ -1035,17 +1038,30 @@ static int qat_uclo_parse_uof_obj(struct icp_qat_fw_loader_handle *handle) static unsigned int qat_uclo_simg_hdr2sign_len(struct icp_qat_fw_loader_handle *handle) { + if (handle->chip_info->dual_sign) + return ICP_QAT_DUALSIGN_OPAQUE_DATA_LEN; + return ICP_QAT_AE_IMG_OFFSET(handle); } static unsigned int qat_uclo_simg_hdr2cont_len(struct icp_qat_fw_loader_handle *handle) { + if (handle->chip_info->dual_sign) + return ICP_QAT_DUALSIGN_OPAQUE_DATA_LEN + ICP_QAT_DUALSIGN_MISC_INFO_LEN; + return ICP_QAT_AE_IMG_OFFSET(handle); } static unsigned int qat_uclo_simg_fw_type(struct icp_qat_fw_loader_handle *handle, void *img_ptr) { struct icp_qat_css_hdr *hdr = img_ptr; + char *fw_hdr = img_ptr; + unsigned int offset; + + if (handle->chip_info->dual_sign) { + offset = 
qat_uclo_simg_hdr2sign_len(handle) + ICP_QAT_DUALSIGN_FW_TYPE_LEN; + return *(fw_hdr + offset); + } return hdr->fw_type; } @@ -1390,16 +1406,27 @@ static int qat_uclo_check_image(struct icp_qat_fw_loader_handle *handle, if (handle->chip_info->fw_auth) { header_len = qat_uclo_simg_hdr2sign_len(handle); simg_type = qat_uclo_simg_fw_type(handle, image); - css_hdr = image; - if ((css_hdr->header_len * css_dword_size) != header_len) - goto err; - if ((css_hdr->size * css_dword_size) != size) - goto err; + + if (handle->chip_info->dual_sign) { + if (css_hdr->module_type != ICP_QAT_DUALSIGN_MODULE_TYPE) + goto err; + if (css_hdr->header_len != ICP_QAT_DUALSIGN_HDR_LEN) + goto err; + if (css_hdr->header_ver != ICP_QAT_DUALSIGN_HDR_VER) + goto err; + } else { + if (css_hdr->header_len * css_dword_size != header_len) + goto err; + if (css_hdr->size * css_dword_size != size) + goto err; + if (size <= header_len) + goto err; + } + if (fw_type != simg_type) goto err; - if (size <= header_len) - goto err; + size -= header_len; } @@ -1515,6 +1542,115 @@ static int qat_uclo_build_auth_desc_RSA(struct icp_qat_fw_loader_handle *handle, return 0; } +static int qat_uclo_build_auth_desc_dualsign(struct icp_qat_fw_loader_handle *handle, + char *image, unsigned int size, + struct icp_firml_dram_desc *dram_desc, + unsigned int fw_type, + struct icp_qat_fw_auth_desc **desc) +{ + struct icp_qat_simg_ae_mode *simg_ae_mode; + struct icp_qat_fw_auth_desc *auth_desc; + unsigned int chunk_offset, img_offset; + u64 bus_addr, addr; + char *virt_addr; + + virt_addr = dram_desc->dram_base_addr_v; + virt_addr += sizeof(struct icp_qat_auth_chunk); + bus_addr = dram_desc->dram_bus_addr + sizeof(struct icp_qat_auth_chunk); + + auth_desc = dram_desc->dram_base_addr_v; + auth_desc->img_len = size - qat_uclo_simg_hdr2sign_len(handle); + auth_desc->css_hdr_high = upper_32_bits(bus_addr); + auth_desc->css_hdr_low = lower_32_bits(bus_addr); + memcpy(virt_addr, image, ICP_QAT_DUALSIGN_OPAQUE_HDR_LEN); + + img_offset = ICP_QAT_DUALSIGN_OPAQUE_HDR_LEN; + chunk_offset = ICP_QAT_DUALSIGN_OPAQUE_HDR_ALIGN_LEN; + + /* RSA pub key */ + addr = bus_addr + chunk_offset; + auth_desc->fwsk_pub_high = upper_32_bits(addr); + auth_desc->fwsk_pub_low = lower_32_bits(addr); + memcpy(virt_addr + chunk_offset, image + img_offset, ICP_QAT_CSS_FWSK_MODULUS_LEN(handle)); + + img_offset += ICP_QAT_CSS_FWSK_MODULUS_LEN(handle); + chunk_offset += ICP_QAT_CSS_FWSK_MODULUS_LEN(handle); + /* RSA padding */ + memset(virt_addr + chunk_offset, 0, ICP_QAT_CSS_FWSK_PAD_LEN(handle)); + + chunk_offset += ICP_QAT_CSS_FWSK_PAD_LEN(handle); + /* RSA exponent */ + memcpy(virt_addr + chunk_offset, image + img_offset, ICP_QAT_CSS_FWSK_EXPONENT_LEN(handle)); + + img_offset += ICP_QAT_CSS_FWSK_EXPONENT_LEN(handle); + chunk_offset += ICP_QAT_CSS_FWSK_EXPONENT_LEN(handle); + /* RSA signature */ + addr = bus_addr + chunk_offset; + auth_desc->signature_high = upper_32_bits(addr); + auth_desc->signature_low = lower_32_bits(addr); + memcpy(virt_addr + chunk_offset, image + img_offset, ICP_QAT_CSS_SIGNATURE_LEN(handle)); + + img_offset += ICP_QAT_CSS_SIGNATURE_LEN(handle); + chunk_offset += ICP_QAT_CSS_SIGNATURE_LEN(handle); + /* XMSS pubkey */ + addr = bus_addr + chunk_offset; + auth_desc->xmss_pubkey_high = upper_32_bits(addr); + auth_desc->xmss_pubkey_low = lower_32_bits(addr); + memcpy(virt_addr + chunk_offset, image + img_offset, ICP_QAT_DUALSIGN_XMSS_PUBKEY_LEN); + + img_offset += ICP_QAT_DUALSIGN_XMSS_PUBKEY_LEN; + chunk_offset += ICP_QAT_DUALSIGN_XMSS_PUBKEY_LEN; + /* XMSS 
signature */ + addr = bus_addr + chunk_offset; + auth_desc->xmss_sig_high = upper_32_bits(addr); + auth_desc->xmss_sig_low = lower_32_bits(addr); + memcpy(virt_addr + chunk_offset, image + img_offset, ICP_QAT_DUALSIGN_XMSS_SIG_LEN); + + img_offset += ICP_QAT_DUALSIGN_XMSS_SIG_LEN; + chunk_offset += ICP_QAT_DUALSIGN_XMSS_SIG_ALIGN_LEN; + + if (dram_desc->dram_size < (chunk_offset + auth_desc->img_len)) { + pr_err("auth chunk memory size is not enough to store data\n"); + return -ENOMEM; + } + + /* Signed data */ + addr = bus_addr + chunk_offset; + auth_desc->img_high = upper_32_bits(addr); + auth_desc->img_low = lower_32_bits(addr); + memcpy(virt_addr + chunk_offset, image + img_offset, auth_desc->img_len); + + chunk_offset += ICP_QAT_DUALSIGN_MISC_INFO_LEN; + /* AE firmware */ + if (fw_type == CSS_AE_FIRMWARE) { + /* AE mode data */ + addr = bus_addr + chunk_offset; + auth_desc->img_ae_mode_data_high = upper_32_bits(addr); + auth_desc->img_ae_mode_data_low = lower_32_bits(addr); + simg_ae_mode = + (struct icp_qat_simg_ae_mode *)(virt_addr + chunk_offset); + auth_desc->ae_mask = simg_ae_mode->ae_mask & handle->cfg_ae_mask; + + chunk_offset += sizeof(struct icp_qat_simg_ae_mode); + /* AE init seq */ + addr = bus_addr + chunk_offset; + auth_desc->img_ae_init_data_high = upper_32_bits(addr); + auth_desc->img_ae_init_data_low = lower_32_bits(addr); + + chunk_offset += ICP_QAT_SIMG_AE_INIT_SEQ_LEN; + /* AE instructions */ + addr = bus_addr + chunk_offset; + auth_desc->img_ae_insts_high = upper_32_bits(addr); + auth_desc->img_ae_insts_low = lower_32_bits(addr); + } else { + addr = bus_addr + chunk_offset; + auth_desc->img_ae_insts_high = upper_32_bits(addr); + auth_desc->img_ae_insts_low = lower_32_bits(addr); + } + *desc = auth_desc; + return 0; +} + static int qat_uclo_map_auth_fw(struct icp_qat_fw_loader_handle *handle, char *image, unsigned int size, struct icp_qat_fw_auth_desc **desc) @@ -1533,6 +1669,10 @@ static int qat_uclo_map_auth_fw(struct icp_qat_fw_loader_handle *handle, auth_chunk->chunk_size = img_desc.dram_size; auth_chunk->chunk_bus_addr = img_desc.dram_bus_addr; + if (handle->chip_info->dual_sign) + return qat_uclo_build_auth_desc_dualsign(handle, image, size, &img_desc, + simg_fw_type, desc); + return qat_uclo_build_auth_desc_RSA(handle, image, size, &img_desc, simg_fw_type, desc); } From patchwork Wed Apr 30 11:34:48 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Suman Kumar Chakraborty X-Patchwork-Id: 886178 Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 238A825B1E8 for ; Wed, 30 Apr 2025 11:35:14 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.15 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1746012915; cv=none; b=LwSsOXiFKJgmUfZsngNRBNucVgrVw1MvKlCpPv9d8M2D3QyljBavd1bqceBSltb6XGMdxBVKv4f18SSpJRcKPr/C/o/l1ROlpBPEZEnNOUjebqc9A+V69R9QYtOZSpfHjrr5Kd0AZe+9JhZz66pI1CEOzHQp4o/W30ft2zBnGnc= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1746012915; c=relaxed/simple; bh=E1txipOVkEw8OVXhi7SL+BN/OV+6jxipDQzCBnuSkVw=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; 
b=ucML5Ox3j5r1NcoClU85/10GYOhbwVpJL0qp5IRziyMnEIrnj8vZkRCdCbVsiu13nQSiPXQKcuD66SbS46lj9YBsCBtDH/pe49aHivzmisED/S0diT+zz+5s1jNNQnIBmHpED2aM+cOd7ImTnlVlMOgfLLupzB9CVYaJBhIoHak= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=HTPZFM21; arc=none smtp.client-ip=198.175.65.15 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="HTPZFM21" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1746012914; x=1777548914; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=E1txipOVkEw8OVXhi7SL+BN/OV+6jxipDQzCBnuSkVw=; b=HTPZFM21PINNsUzgh48YFT6nENZYjMOjOB6N1YBJGCkhrpLaKGyHe211 RnQBoJ9X81nkH1wVWD5Ket2nrNATCTvhsK1oDPoVJWXrxF0EL7i+6Q/pg ErbyYa+MCuzgb2UYvF1h1uo0qqIdLHG+AGXU7GD2XY7nWJRyu4VmOYhAj 5aZSan11ZQmU4aoA9rOlxldt8w2YAd40T051zguOFDmULMRE3P4Liaw+j Y5NxCxc7iYwSLCdnGBNVMssOkGDMNIPhsq+9LWxjKNEbuj17c7oyWN3zg QWb3AK4lXrg61V2b5zCwfKl3YuvsNZsLawE/ceFID+QFbcMQ2lrFSq8x0 A==; X-CSE-ConnectionGUID: 1c/KiWhzS52iSqST0q18vw== X-CSE-MsgGUID: nG4ynfpESfeC4irutYp2Yg== X-IronPort-AV: E=McAfee;i="6700,10204,11418"; a="51331154" X-IronPort-AV: E=Sophos;i="6.15,251,1739865600"; d="scan'208";a="51331154" Received: from orviesa009.jf.intel.com ([10.64.159.149]) by orvoesa107.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 30 Apr 2025 04:35:14 -0700 X-CSE-ConnectionGUID: r/8xUQW0SYOd1iJvhImh0g== X-CSE-MsgGUID: Ccpt4kbPQSu4xsXJLn6ruQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.15,251,1739865600"; d="scan'208";a="133812539" Received: from t21-qat.iind.intel.com ([10.49.15.35]) by orviesa009.jf.intel.com with ESMTP; 30 Apr 2025 04:35:12 -0700 From: Suman Kumar Chakraborty To: herbert@gondor.apana.org.au Cc: linux-crypto@vger.kernel.org, qat-linux@intel.com Subject: [PATCH 06/11] crypto: qat - export adf_get_service_mask() Date: Wed, 30 Apr 2025 12:34:48 +0100 Message-Id: <20250430113453.1587497-7-suman.kumar.chakraborty@intel.com> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20250430113453.1587497-1-suman.kumar.chakraborty@intel.com> References: <20250430113453.1587497-1-suman.kumar.chakraborty@intel.com> Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Giovanni Cabiddu Export the function adf_get_service_mask() as it will be used by the qat_6xxx driver to configure the device. 
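A minimal usage sketch, assuming a caller that checks a single service bit; SVC_DC and the configure_compression() helper are illustrative placeholders, not taken from this patch:

	unsigned long mask = 0;
	int ret;

	ret = adf_get_service_mask(accel_dev, &mask);
	if (ret)
		return ret;

	/* SVC_DC stands in for whichever bit of the services enum is of interest */
	if (test_bit(SVC_DC, &mask))
		ret = configure_compression(accel_dev);	/* placeholder helper */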
Signed-off-by: Giovanni Cabiddu Reviewed-by: Andy Shevchenko --- drivers/crypto/intel/qat/qat_common/adf_cfg_services.c | 3 ++- drivers/crypto/intel/qat/qat_common/adf_cfg_services.h | 1 + 2 files changed, 3 insertions(+), 1 deletion(-) diff --git a/drivers/crypto/intel/qat/qat_common/adf_cfg_services.c b/drivers/crypto/intel/qat/qat_common/adf_cfg_services.c index 30abcd9e1283..c39871291da7 100644 --- a/drivers/crypto/intel/qat/qat_common/adf_cfg_services.c +++ b/drivers/crypto/intel/qat/qat_common/adf_cfg_services.c @@ -116,7 +116,7 @@ int adf_parse_service_string(struct adf_accel_dev *accel_dev, const char *in, return adf_service_mask_to_string(mask, out, out_len); } -static int adf_get_service_mask(struct adf_accel_dev *accel_dev, unsigned long *mask) +int adf_get_service_mask(struct adf_accel_dev *accel_dev, unsigned long *mask) { char services[ADF_CFG_MAX_VAL_LEN_IN_BYTES] = { }; size_t len; @@ -138,6 +138,7 @@ static int adf_get_service_mask(struct adf_accel_dev *accel_dev, unsigned long * return ret; } +EXPORT_SYMBOL_GPL(adf_get_service_mask); int adf_get_service_enabled(struct adf_accel_dev *accel_dev) { diff --git a/drivers/crypto/intel/qat/qat_common/adf_cfg_services.h b/drivers/crypto/intel/qat/qat_common/adf_cfg_services.h index f6bafc15cbc6..3742c450878f 100644 --- a/drivers/crypto/intel/qat/qat_common/adf_cfg_services.h +++ b/drivers/crypto/intel/qat/qat_common/adf_cfg_services.h @@ -32,5 +32,6 @@ enum { int adf_parse_service_string(struct adf_accel_dev *accel_dev, const char *in, size_t in_len, char *out, size_t out_len); int adf_get_service_enabled(struct adf_accel_dev *accel_dev); +int adf_get_service_mask(struct adf_accel_dev *accel_dev, unsigned long *mask); #endif From patchwork Wed Apr 30 11:34:49 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Suman Kumar Chakraborty X-Patchwork-Id: 886177 Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 961AA25B1D3 for ; Wed, 30 Apr 2025 11:35:15 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.15 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1746012918; cv=none; b=GBenMz9RivDzekr5acwecDpwamsUliWPjezsrD+nAnpSVANy+IJzSk3nSD2vbaacD3iIalLPGwjjelWmr9ao72kdkFPOFejGBLZHBxbT/DSGCeXdfJGxrRj8sly22/XQRNlH37/3Qp3CM/aNB63TTD19cszN1fwx/ZSlNi69oqg= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1746012918; c=relaxed/simple; bh=MDsLmT19NKC6X6jFNhUxgM3IO5pWep9RfqekO5yTV70=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=CcDPcsdJhkuWJ+nY5sjyWNwNiR1KyZbZbC4tqW6XeoVMEwkt9IXPjHfLvyTiIiKyQmpVvzmcju6KYcxtopDqEij0ws3jayOiv5DUeXFI/BD9qyYkLlg31pdBGhItEVO5OofU/npm8DrH6y+Iw6clphJaZIXjXbWcTnyCwGNuCmc= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=Bipq/QmT; arc=none smtp.client-ip=198.175.65.15 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="Bipq/QmT" 
From: Suman Kumar Chakraborty
To: herbert@gondor.apana.org.au
Cc: linux-crypto@vger.kernel.org, qat-linux@intel.com
Subject: [PATCH 07/11] crypto: qat - expose configuration functions
Date: Wed, 30 Apr 2025 12:34:49 +0100
Message-Id: <20250430113453.1587497-8-suman.kumar.chakraborty@intel.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20250430113453.1587497-1-suman.kumar.chakraborty@intel.com>
References: <20250430113453.1587497-1-suman.kumar.chakraborty@intel.com>
MIME-Version: 1.0

The functions related to compression and crypto configuration were previously declared static, restricting their visibility to the defining source file. Remove the static qualifier, allowing them to be used in other files as needed. This is necessary for sharing these configuration functions with other QAT generations.
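A minimal sketch of how a later generation could reuse these helpers, assuming a dispatch on the enabled service similar to the existing GEN4 flow; the function name and the SVC_* case labels are illustrative assumptions, not part of this patch:

	int adf_gen6_dev_config(struct adf_accel_dev *accel_dev)
	{
		int ret = adf_get_service_enabled(accel_dev);

		if (ret < 0)
			return ret;

		switch (ret) {
		case SVC_DC:		/* assumed enum value */
			return adf_comp_dev_config(accel_dev);
		case SVC_SYM:		/* assumed enum value */
		case SVC_ASYM:		/* assumed enum value */
			return adf_crypto_dev_config(accel_dev);
		default:
			return adf_no_dev_config(accel_dev);
		}
	}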
Signed-off-by: Suman Kumar Chakraborty Reviewed-by: Giovanni Cabiddu Reviewed-by: Andy Shevchenko --- drivers/crypto/intel/qat/qat_common/adf_gen4_config.c | 6 +++--- drivers/crypto/intel/qat/qat_common/adf_gen4_config.h | 3 +++ 2 files changed, 6 insertions(+), 3 deletions(-) diff --git a/drivers/crypto/intel/qat/qat_common/adf_gen4_config.c b/drivers/crypto/intel/qat/qat_common/adf_gen4_config.c index f97e7a880f3a..afcdfdd0a37a 100644 --- a/drivers/crypto/intel/qat/qat_common/adf_gen4_config.c +++ b/drivers/crypto/intel/qat/qat_common/adf_gen4_config.c @@ -11,7 +11,7 @@ #include "qat_compression.h" #include "qat_crypto.h" -static int adf_crypto_dev_config(struct adf_accel_dev *accel_dev) +int adf_crypto_dev_config(struct adf_accel_dev *accel_dev) { char key[ADF_CFG_MAX_KEY_LEN_IN_BYTES]; int banks = GET_MAX_BANKS(accel_dev); @@ -117,7 +117,7 @@ static int adf_crypto_dev_config(struct adf_accel_dev *accel_dev) return ret; } -static int adf_comp_dev_config(struct adf_accel_dev *accel_dev) +int adf_comp_dev_config(struct adf_accel_dev *accel_dev) { char key[ADF_CFG_MAX_KEY_LEN_IN_BYTES]; int banks = GET_MAX_BANKS(accel_dev); @@ -187,7 +187,7 @@ static int adf_comp_dev_config(struct adf_accel_dev *accel_dev) return ret; } -static int adf_no_dev_config(struct adf_accel_dev *accel_dev) +int adf_no_dev_config(struct adf_accel_dev *accel_dev) { unsigned long val; int ret; diff --git a/drivers/crypto/intel/qat/qat_common/adf_gen4_config.h b/drivers/crypto/intel/qat/qat_common/adf_gen4_config.h index bb87655f69a8..38a674c27e40 100644 --- a/drivers/crypto/intel/qat/qat_common/adf_gen4_config.h +++ b/drivers/crypto/intel/qat/qat_common/adf_gen4_config.h @@ -7,5 +7,8 @@ int adf_gen4_dev_config(struct adf_accel_dev *accel_dev); int adf_gen4_cfg_dev_init(struct adf_accel_dev *accel_dev); +int adf_crypto_dev_config(struct adf_accel_dev *accel_dev); +int adf_comp_dev_config(struct adf_accel_dev *accel_dev); +int adf_no_dev_config(struct adf_accel_dev *accel_dev); #endif From patchwork Wed Apr 30 11:34:50 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Suman Kumar Chakraborty X-Patchwork-Id: 886556 Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 27C3A246799 for ; Wed, 30 Apr 2025 11:35:17 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.15 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1746012918; cv=none; b=n9BZJi8X6ZAlOxaeQdGL/6sHANkso048UchyqXIQD/GdzMWN4uzj5+IT8laBZNAx+7aR+VdOcFsdpatqjpviUwOe9W4r6/Aa5m5jZJ7Z0QLhCLbWATbf6+zbxHbzikbm2AwqPesOnMYakT0+x78BuRnBiZArHTUPDG6A5oQLxj0= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1746012918; c=relaxed/simple; bh=wQd4Wo6aGykoa0ObO1XiM7MHW833x9izyDBEZxVGSRk=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=pM1pm9QGhxREWBZr5fpPOwRcm7OmijnqN4idMdWGaLdap1X0C5B5B9aKdaWn2b7pVvZBGm1yYQETYKnXUjPW6pJQei++IfvfsvV0S2GhZFW/qdHfCOQtoWkSDR1HPQg8SPd5eD8ykcuyUJYbiSs/F4pLbBJNwK+coMeSHalL62o= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=SDFiS30/; arc=none smtp.client-ip=198.175.65.15 Authentication-Results: 
smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="SDFiS30/" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1746012917; x=1777548917; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=wQd4Wo6aGykoa0ObO1XiM7MHW833x9izyDBEZxVGSRk=; b=SDFiS30/0aPPejMXkSE/4k6KPjIvu+rebpnFpHY72QyGkoeDQi3fqS/9 6wr54r8fyqu2VLDsl10fWRPIjaXnj2HROZFuHoLavLR/rXH5ysKSD8pUf 8fvNOwDl9ezJyNmsjMu5tlhMKraxm2B1WgvblTlaeJ3omGva2PAnK37UV xCccngsKNR/78AzbFJK44EV7ozGvd0TsW8YEt2/NeSEaydHs0rJgqRgl7 GHpuf59/3b5DJB8+WOUA8FLqsuKQsh4xU5dN71W28berilYzMNQZOVZ8H oNkGjpHc8IVpOql2AiQey9Wkkk30Of38//ytb4wUyreIACFZgTMctL8x9 A==; X-CSE-ConnectionGUID: p3pEOKmFQv+yvUjr1hzRHw== X-CSE-MsgGUID: Xo7XiT1ATzy82mdJsaMq9w== X-IronPort-AV: E=McAfee;i="6700,10204,11418"; a="51331160" X-IronPort-AV: E=Sophos;i="6.15,251,1739865600"; d="scan'208";a="51331160" Received: from orviesa009.jf.intel.com ([10.64.159.149]) by orvoesa107.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 30 Apr 2025 04:35:17 -0700 X-CSE-ConnectionGUID: d3KsSuDzSCieQE2BCwqr8A== X-CSE-MsgGUID: HGkKXLJXQqGfqXjCuLGMig== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.15,251,1739865600"; d="scan'208";a="133812551" Received: from t21-qat.iind.intel.com ([10.49.15.35]) by orviesa009.jf.intel.com with ESMTP; 30 Apr 2025 04:35:15 -0700 From: Suman Kumar Chakraborty To: herbert@gondor.apana.org.au Cc: linux-crypto@vger.kernel.org, qat-linux@intel.com Subject: [PATCH 08/11] crypto: qat - export adf_init_admin_pm() Date: Wed, 30 Apr 2025 12:34:50 +0100 Message-Id: <20250430113453.1587497-9-suman.kumar.chakraborty@intel.com> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20250430113453.1587497-1-suman.kumar.chakraborty@intel.com> References: <20250430113453.1587497-1-suman.kumar.chakraborty@intel.com> Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Export the function adf_init_admin_pm() as it will be used by the qat_6xxx driver to send the power management initialization messages to the firmware. 
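A minimal call-site sketch; the idle-delay value passed here is a placeholder, not taken from this series:

	/* Send the power management init message to the firmware admin AE */
	ret = adf_init_admin_pm(accel_dev, 0x6);	/* 0x6: placeholder idle delay */
	if (ret)
		dev_err(&GET_DEV(accel_dev), "Failed to configure power management\n");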
Signed-off-by: Suman Kumar Chakraborty Reviewed-by: Giovanni Cabiddu Reviewed-by: Andy Shevchenko --- drivers/crypto/intel/qat/qat_common/adf_admin.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/crypto/intel/qat/qat_common/adf_admin.c b/drivers/crypto/intel/qat/qat_common/adf_admin.c index acad526eb741..573388c37100 100644 --- a/drivers/crypto/intel/qat/qat_common/adf_admin.c +++ b/drivers/crypto/intel/qat/qat_common/adf_admin.c @@ -449,6 +449,7 @@ int adf_init_admin_pm(struct adf_accel_dev *accel_dev, u32 idle_delay) return adf_send_admin(accel_dev, &req, &resp, ae_mask); } +EXPORT_SYMBOL_GPL(adf_init_admin_pm); int adf_get_pm_info(struct adf_accel_dev *accel_dev, dma_addr_t p_state_addr, size_t buff_size) From patchwork Wed Apr 30 11:34:51 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Suman Kumar Chakraborty X-Patchwork-Id: 886555 Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 4B59225B665 for ; Wed, 30 Apr 2025 11:35:19 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.15 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1746012921; cv=none; b=trOLNKeEejG7eJUPc3KvAgc62ChdXR70L02ZScAgeE9H9BcnDdwVb2ry9oMuj7fN95NpDauaSRJPisYaH/97w+hzme/bB6W85UMeJ/ox8bFVRPf22DOR8I3TXsy6dUdrJKmRhEJxUocAygYvBoyKV5XaOQt+wISE7Rs9GHbrmTk= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1746012921; c=relaxed/simple; bh=17mAPsiBKbtl0RgTOIX9RXIqBH6nOcp4OdP7FDyDmQs=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=Ukui2O7NGnsH71CFMzkZL8AtEJgzKIBamu6oTcLzF8RUBEybkdp4aF6HlXDdV2WynpZMnVURmlXcpeRwQGRlbYdeX/uBAXNx0wV54JYvP3FemzgQMlk7FAFR3cb9gxYV2GNK28ctLqLshIhd51X+s89cCh8N9BmGwXY1e70VLYo= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=nAmydsvj; arc=none smtp.client-ip=198.175.65.15 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="nAmydsvj" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1746012919; x=1777548919; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=17mAPsiBKbtl0RgTOIX9RXIqBH6nOcp4OdP7FDyDmQs=; b=nAmydsvjqikdrf5/0hkycb0a8ioT+9WnhCDkD2jeslFRxfiLGiRqTNvU d3TS5akX9GPOyaoLFAQ1Y/mLvABEbRSNaD0Q2WZfQjBvFKEppHFyHnD07 DlQJI1uJ2eA3ZNRmOfkkB4jthqV9tw80DcWysrzANSsYufblhZ2PelOe0 MznwwIavOPLFLvuXq2p9XYAI1JOZi8bnHf3HLpgIi0fmwa5WKvT+ZH5pX iq233f5AZeqdqFZlkHLdi0c+3wHZNYV9J8YroWZLqQyKh7NbwDoFFp4h0 J08Xe+DpkB2azDfO4fxOy3wxoYe+u0UsVtyA1aRlVbYL5/kNJNBCWJzAh w==; X-CSE-ConnectionGUID: 6bYeg3G9ThmtMIel+u5bGg== X-CSE-MsgGUID: r5owAAbFQmC0nROYkN4KVg== X-IronPort-AV: E=McAfee;i="6700,10204,11418"; a="51331162" X-IronPort-AV: E=Sophos;i="6.15,251,1739865600"; d="scan'208";a="51331162" Received: from orviesa009.jf.intel.com ([10.64.159.149]) by orvoesa107.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 30 Apr 
2025 04:35:19 -0700 X-CSE-ConnectionGUID: gi9UmMr1RjCO6aFupREO1w== X-CSE-MsgGUID: IoXKwf0VRgiOtnxpvwyHVg== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.15,251,1739865600"; d="scan'208";a="133812555" Received: from t21-qat.iind.intel.com ([10.49.15.35]) by orviesa009.jf.intel.com with ESMTP; 30 Apr 2025 04:35:17 -0700 From: Suman Kumar Chakraborty To: herbert@gondor.apana.org.au Cc: linux-crypto@vger.kernel.org, qat-linux@intel.com Subject: [PATCH 09/11] crypto: qat - update firmware api Date: Wed, 30 Apr 2025 12:34:51 +0100 Message-Id: <20250430113453.1587497-10-suman.kumar.chakraborty@intel.com> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20250430113453.1587497-1-suman.kumar.chakraborty@intel.com> References: <20250430113453.1587497-1-suman.kumar.chakraborty@intel.com> Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Update the firmware API to have partial decomp as an argument. Modify the firmware descriptor to support auto-select best and partial decompress. Define the maximal auto-select best value. Define the mask and bit position for the partial decompress field in the firmware descriptor. Signed-off-by: Suman Kumar Chakraborty Reviewed-by: Giovanni Cabiddu Reviewed-by: Andy Shevchenko --- drivers/crypto/intel/qat/qat_common/adf_dc.c | 3 ++- .../intel/qat/qat_common/icp_qat_fw_comp.h | 23 ++++++++++++++++--- 2 files changed, 22 insertions(+), 4 deletions(-) diff --git a/drivers/crypto/intel/qat/qat_common/adf_dc.c b/drivers/crypto/intel/qat/qat_common/adf_dc.c index 4beb4b7dbf0e..3e8fb4e3ed97 100644 --- a/drivers/crypto/intel/qat/qat_common/adf_dc.c +++ b/drivers/crypto/intel/qat/qat_common/adf_dc.c @@ -46,7 +46,8 @@ int qat_comp_build_ctx(struct adf_accel_dev *accel_dev, void *ctx, enum adf_dc_a ICP_QAT_FW_COMP_NO_XXHASH_ACC, ICP_QAT_FW_COMP_CNV_ERROR_NONE, ICP_QAT_FW_COMP_NO_APPEND_CRC, - ICP_QAT_FW_COMP_NO_DROP_DATA); + ICP_QAT_FW_COMP_NO_DROP_DATA, + ICP_QAT_FW_COMP_NO_PARTIAL_DECOMPRESS); ICP_QAT_FW_COMN_NEXT_ID_SET(comp_cd_ctrl, ICP_QAT_FW_SLICE_DRAM_WR); ICP_QAT_FW_COMN_CURR_ID_SET(comp_cd_ctrl, ICP_QAT_FW_SLICE_COMP); diff --git a/drivers/crypto/intel/qat/qat_common/icp_qat_fw_comp.h b/drivers/crypto/intel/qat/qat_common/icp_qat_fw_comp.h index 04f645957e28..81969c515a17 100644 --- a/drivers/crypto/intel/qat/qat_common/icp_qat_fw_comp.h +++ b/drivers/crypto/intel/qat/qat_common/icp_qat_fw_comp.h @@ -44,6 +44,7 @@ enum icp_qat_fw_comp_20_cmd_id { #define ICP_QAT_FW_COMP_RET_DISABLE_TYPE0_HEADER_DATA_MASK 0x1 #define ICP_QAT_FW_COMP_DISABLE_SECURE_RAM_AS_INTMD_BUF_BITPOS 7 #define ICP_QAT_FW_COMP_DISABLE_SECURE_RAM_AS_INTMD_BUF_MASK 0x1 +#define ICP_QAT_FW_COMP_AUTO_SELECT_BEST_MAX_VALUE 0xFFFFFFFF #define ICP_QAT_FW_COMP_FLAGS_BUILD(sesstype, autoselect, enhanced_asb, \ ret_uncomp, secure_ram) \ @@ -117,7 +118,7 @@ struct icp_qat_fw_comp_req_params { #define ICP_QAT_FW_COMP_REQ_PARAM_FLAGS_BUILD(sop, eop, bfinal, cnv, cnvnr, \ cnvdfx, crc, xxhash_acc, \ cnv_error_type, append_crc, \ - drop_data) \ + drop_data, partial_decomp) \ ((((sop) & ICP_QAT_FW_COMP_SOP_MASK) << \ ICP_QAT_FW_COMP_SOP_BITPOS) | \ (((eop) & ICP_QAT_FW_COMP_EOP_MASK) << \ @@ -139,7 +140,9 @@ struct icp_qat_fw_comp_req_params { (((append_crc) & ICP_QAT_FW_COMP_APPEND_CRC_MASK) \ << ICP_QAT_FW_COMP_APPEND_CRC_BITPOS) | \ (((drop_data) & ICP_QAT_FW_COMP_DROP_DATA_MASK) \ - << ICP_QAT_FW_COMP_DROP_DATA_BITPOS)) + << ICP_QAT_FW_COMP_DROP_DATA_BITPOS) | \ + (((partial_decomp) & ICP_QAT_FW_COMP_PARTIAL_DECOMP_MASK) \ + << 
ICP_QAT_FW_COMP_PARTIAL_DECOMP_BITPOS)) #define ICP_QAT_FW_COMP_NOT_SOP 0 #define ICP_QAT_FW_COMP_SOP 1 @@ -161,6 +164,8 @@ struct icp_qat_fw_comp_req_params { #define ICP_QAT_FW_COMP_NO_APPEND_CRC 0 #define ICP_QAT_FW_COMP_DROP_DATA 1 #define ICP_QAT_FW_COMP_NO_DROP_DATA 0 +#define ICP_QAT_FW_COMP_PARTIAL_DECOMPRESS 1 +#define ICP_QAT_FW_COMP_NO_PARTIAL_DECOMPRESS 0 #define ICP_QAT_FW_COMP_SOP_BITPOS 0 #define ICP_QAT_FW_COMP_SOP_MASK 0x1 #define ICP_QAT_FW_COMP_EOP_BITPOS 1 @@ -189,6 +194,8 @@ struct icp_qat_fw_comp_req_params { #define ICP_QAT_FW_COMP_APPEND_CRC_MASK 0x1 #define ICP_QAT_FW_COMP_DROP_DATA_BITPOS 25 #define ICP_QAT_FW_COMP_DROP_DATA_MASK 0x1 +#define ICP_QAT_FW_COMP_PARTIAL_DECOMP_BITPOS 27 +#define ICP_QAT_FW_COMP_PARTIAL_DECOMP_MASK 0x1 #define ICP_QAT_FW_COMP_SOP_GET(flags) \ QAT_FIELD_GET(flags, ICP_QAT_FW_COMP_SOP_BITPOS, \ @@ -281,8 +288,18 @@ struct icp_qat_fw_comp_req { union { struct icp_qat_fw_xlt_req_params xlt_pars; __u32 resrvd1[ICP_QAT_FW_NUM_LONGWORDS_2]; + struct { + __u32 partial_decompress_length; + __u32 partial_decompress_offset; + } partial_decompress; } u1; - __u32 resrvd2[ICP_QAT_FW_NUM_LONGWORDS_2]; + union { + __u32 resrvd2[ICP_QAT_FW_NUM_LONGWORDS_2]; + struct { + __u32 asb_value; + __u32 reserved; + } asb_threshold; + } u3; struct icp_qat_fw_comp_cd_hdr comp_cd_ctrl; union { struct icp_qat_fw_xlt_cd_hdr xlt_cd_ctrl; From patchwork Wed Apr 30 11:34:52 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Suman Kumar Chakraborty X-Patchwork-Id: 886176 Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 18C492586C1 for ; Wed, 30 Apr 2025 11:35:21 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.15 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1746012923; cv=none; b=uVU7KgiMExQV8ikYA3pSBHHTgA2U+6Tnt5CIX1OS7dj0otiJAa1T+PNIAovSqBC45Cs/F7bEsZn5hUhUK3RigO1uUCJf94tcxFsFOlIfxulK23cMuZeyZkp8k9DNhqGDz/vsRP8Sk4bOWkJ5nhL3zJzSSZWaDXcvyqIdAnskz9Q= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1746012923; c=relaxed/simple; bh=6L7V/scOUMrLvSwy1NoEO+t9Hhqz3wCudBXQWYhCukU=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=r8l2EUmLopElF4ukt9oXDNFz0IJd4vk2KDzJKf2quyhv6FfHx5HmiNhrX28T2FzIYgDus6Z7zAviHE+LMsziwy9ufGt1JlnmOlyiJ7vrefXy3KDk8qkKLTiA0FE3vhUH6X5PanoRYh4N4gi1xmxop1ctXPH6n64XSUM7cViwUeo= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=A+Vy6dky; arc=none smtp.client-ip=198.175.65.15 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="A+Vy6dky" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1746012921; x=1777548921; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=6L7V/scOUMrLvSwy1NoEO+t9Hhqz3wCudBXQWYhCukU=; b=A+Vy6dkyygxMTgj27gYGaylj4sPpGVwXrDRNvMdve3MIWLlHSTWGaQB/ 
V70lV9oFQvu51Awo+s0F1uK9PUtlDJy3C1H7817JB9FidfvP1BGZqF9q9 x7x/nn19IdjKG0m4Rs+fviU8vAVze0fUgYFbUxU9A9cOTSpK+yIeRX8PO seyXt/+MtV89nG/NlLXdg0slQCL/JtbqzvklyZ/HXjhxXty6t2hMoHJ+Z N9NQv3Zx/Tkczpj9Jpw/BX8VnFnh9SzM/8Q6d9D3eDg7e3sYNaf7S53hs j/eOca+PNja6UhFCsdTxHv4Mf40JKsQYqglF23TCq87fXqyieSQs5CxKh g==; X-CSE-ConnectionGUID: Xgr1ho3yTuCbObipgl4UXg== X-CSE-MsgGUID: Ia3GWXKYQOegdTSp4SFnaw== X-IronPort-AV: E=McAfee;i="6700,10204,11418"; a="51331164" X-IronPort-AV: E=Sophos;i="6.15,251,1739865600"; d="scan'208";a="51331164" Received: from orviesa009.jf.intel.com ([10.64.159.149]) by orvoesa107.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 30 Apr 2025 04:35:21 -0700 X-CSE-ConnectionGUID: Rg55coD8RT2RtBoUsEYyoQ== X-CSE-MsgGUID: yPdyuuWPT+eWuGSKfUz/aw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.15,251,1739865600"; d="scan'208";a="133812559" Received: from t21-qat.iind.intel.com ([10.49.15.35]) by orviesa009.jf.intel.com with ESMTP; 30 Apr 2025 04:35:19 -0700 From: Suman Kumar Chakraborty To: herbert@gondor.apana.org.au Cc: linux-crypto@vger.kernel.org, qat-linux@intel.com Subject: [PATCH 10/11] crypto: qat - add firmware headers for GEN6 devices Date: Wed, 30 Apr 2025 12:34:52 +0100 Message-Id: <20250430113453.1587497-11-suman.kumar.chakraborty@intel.com> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20250430113453.1587497-1-suman.kumar.chakraborty@intel.com> References: <20250430113453.1587497-1-suman.kumar.chakraborty@intel.com> Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Add firmware headers related to compression that define macros for building the hardware configuration word, along with bitfields related to algorithm settings. Signed-off-by: Suman Kumar Chakraborty Reviewed-by: Giovanni Cabiddu Reviewed-by: Andy Shevchenko --- .../intel/qat/qat_common/icp_qat_hw_51_comp.h | 99 ++++++ .../qat/qat_common/icp_qat_hw_51_comp_defs.h | 318 ++++++++++++++++++ 2 files changed, 417 insertions(+) create mode 100644 drivers/crypto/intel/qat/qat_common/icp_qat_hw_51_comp.h create mode 100644 drivers/crypto/intel/qat/qat_common/icp_qat_hw_51_comp_defs.h diff --git a/drivers/crypto/intel/qat/qat_common/icp_qat_hw_51_comp.h b/drivers/crypto/intel/qat/qat_common/icp_qat_hw_51_comp.h new file mode 100644 index 000000000000..dce639152345 --- /dev/null +++ b/drivers/crypto/intel/qat/qat_common/icp_qat_hw_51_comp.h @@ -0,0 +1,99 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Copyright(c) 2025 Intel Corporation */ +#ifndef ICP_QAT_HW_51_COMP_H_ +#define ICP_QAT_HW_51_COMP_H_ + +#include + +#include "icp_qat_fw.h" +#include "icp_qat_hw_51_comp_defs.h" + +struct icp_qat_hw_comp_51_config_csr_lower { + enum icp_qat_hw_comp_51_abd abd; + enum icp_qat_hw_comp_51_lllbd_ctrl lllbd; + enum icp_qat_hw_comp_51_search_depth sd; + enum icp_qat_hw_comp_51_min_match_control mmctrl; + enum icp_qat_hw_comp_51_lz4_block_checksum lbc; +}; + +static inline u32 +ICP_QAT_FW_COMP_51_BUILD_CONFIG_LOWER(struct icp_qat_hw_comp_51_config_csr_lower csr) +{ + u32 val32 = 0; + + QAT_FIELD_SET(val32, csr.abd, + ICP_QAT_HW_COMP_51_CONFIG_CSR_ABD_BITPOS, + ICP_QAT_HW_COMP_51_CONFIG_CSR_ABD_MASK); + QAT_FIELD_SET(val32, csr.lllbd, + ICP_QAT_HW_COMP_51_CONFIG_CSR_LLLBD_CTRL_BITPOS, + ICP_QAT_HW_COMP_51_CONFIG_CSR_LLLBD_CTRL_MASK); + QAT_FIELD_SET(val32, csr.sd, + ICP_QAT_HW_COMP_51_CONFIG_CSR_SEARCH_DEPTH_BITPOS, + ICP_QAT_HW_COMP_51_CONFIG_CSR_SEARCH_DEPTH_MASK); + QAT_FIELD_SET(val32, csr.mmctrl, + ICP_QAT_HW_COMP_51_CONFIG_CSR_MIN_MATCH_CONTROL_BITPOS, + 
ICP_QAT_HW_COMP_51_CONFIG_CSR_MIN_MATCH_CONTROL_MASK); + QAT_FIELD_SET(val32, csr.lbc, + ICP_QAT_HW_COMP_51_CONFIG_CSR_LZ4_BLOCK_CHECKSUM_BITPOS, + ICP_QAT_HW_COMP_51_CONFIG_CSR_LZ4_BLOCK_CHECKSUM_MASK); + + return val32; +} + +struct icp_qat_hw_comp_51_config_csr_upper { + enum icp_qat_hw_comp_51_dmm_algorithm edmm; + enum icp_qat_hw_comp_51_bms bms; + enum icp_qat_hw_comp_51_scb_mode_reset_mask scb_mode_reset; +}; + +static inline u32 +ICP_QAT_FW_COMP_51_BUILD_CONFIG_UPPER(struct icp_qat_hw_comp_51_config_csr_upper csr) +{ + u32 val32 = 0; + + QAT_FIELD_SET(val32, csr.edmm, + ICP_QAT_HW_COMP_51_CONFIG_CSR_DMM_ALGORITHM_BITPOS, + ICP_QAT_HW_COMP_51_CONFIG_CSR_DMM_ALGORITHM_MASK); + QAT_FIELD_SET(val32, csr.bms, + ICP_QAT_HW_COMP_51_CONFIG_CSR_BMS_BITPOS, + ICP_QAT_HW_COMP_51_CONFIG_CSR_BMS_MASK); + QAT_FIELD_SET(val32, csr.scb_mode_reset, + ICP_QAT_HW_COMP_51_CONFIG_CSR_SCB_MODE_RESET_MASK_BITPOS, + ICP_QAT_HW_COMP_51_CONFIG_CSR_SCB_MODE_RESET_MASK_MASK); + + return val32; +} + +struct icp_qat_hw_decomp_51_config_csr_lower { + enum icp_qat_hw_decomp_51_lz4_block_checksum lbc; +}; + +static inline u32 +ICP_QAT_FW_DECOMP_51_BUILD_CONFIG_LOWER(struct icp_qat_hw_decomp_51_config_csr_lower csr) +{ + u32 val32 = 0; + + QAT_FIELD_SET(val32, csr.lbc, + ICP_QAT_HW_DECOMP_51_CONFIG_CSR_LZ4_BLOCK_CHECKSUM_BITPOS, + ICP_QAT_HW_DECOMP_51_CONFIG_CSR_LZ4_BLOCK_CHECKSUM_MASK); + + return val32; +} + +struct icp_qat_hw_decomp_51_config_csr_upper { + enum icp_qat_hw_decomp_51_bms bms; +}; + +static inline u32 +ICP_QAT_FW_DECOMP_51_BUILD_CONFIG_UPPER(struct icp_qat_hw_decomp_51_config_csr_upper csr) +{ + u32 val32 = 0; + + QAT_FIELD_SET(val32, csr.bms, + ICP_QAT_HW_DECOMP_51_CONFIG_CSR_BMS_BITPOS, + ICP_QAT_HW_DECOMP_51_CONFIG_CSR_BMS_MASK); + + return val32; +} + +#endif /* ICP_QAT_HW_51_COMP_H_ */ diff --git a/drivers/crypto/intel/qat/qat_common/icp_qat_hw_51_comp_defs.h b/drivers/crypto/intel/qat/qat_common/icp_qat_hw_51_comp_defs.h new file mode 100644 index 000000000000..e745688c5da4 --- /dev/null +++ b/drivers/crypto/intel/qat/qat_common/icp_qat_hw_51_comp_defs.h @@ -0,0 +1,318 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Copyright(c) 2025 Intel Corporation */ +#ifndef ICP_QAT_HW_51_COMP_DEFS_H_ +#define ICP_QAT_HW_51_COMP_DEFS_H_ + +#include + +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_SOM_CONTROL_BITPOS 28 +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_SOM_CONTROL_MASK GENMASK(1, 0) +enum icp_qat_hw_comp_51_som_control { + ICP_QAT_HW_COMP_51_SOM_CONTROL_NORMAL_MODE = 0x0, + ICP_QAT_HW_COMP_51_SOM_CONTROL_DICTIONARY_MODE = 0x1, + ICP_QAT_HW_COMP_51_SOM_CONTROL_INPUT_CRC = 0x2, + ICP_QAT_HW_COMP_51_SOM_CONTROL_RESERVED_MODE = 0x3, +}; + +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_SOM_CONTROL_DEFAULT_VAL \ + ICP_QAT_HW_COMP_51_SOM_CONTROL_NORMAL_MODE +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_SKIP_HASH_RD_CONTROL_BITPOS 27 +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_SKIP_HASH_RD_CONTROL_MASK GENMASK(0, 0) +enum icp_qat_hw_comp_51_skip_hash_rd_control { + ICP_QAT_HW_COMP_51_SKIP_HASH_RD_CONTROL_NO_SKIP = 0x0, + ICP_QAT_HW_COMP_51_SKIP_HASH_RD_CONTROL_SKIP_HASH_READS = 0x1, +}; + +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_SKIP_HASH_RD_CONTROL_DEFAULT_VAL \ + ICP_QAT_HW_COMP_51_SKIP_HASH_RD_CONTROL_NO_SKIP +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_BYPASS_COMPRESSION_BITPOS 25 +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_BYPASS_COMPRESSION_MASK GENMASK(0, 0) +enum icp_qat_hw_comp_51_bypass_compression { + ICP_QAT_HW_COMP_51_BYPASS_COMPRESSION_DISABLED = 0x0, + ICP_QAT_HW_COMP_51_BYPASS_COMPRESSION_ENABLED = 0x1, +}; + +#define 
ICP_QAT_HW_COMP_51_CONFIG_CSR_BYPASS_COMPRESSION_DEFAULT_VAL \ + ICP_QAT_HW_COMP_51_BYPASS_COMPRESSION_DISABLED +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_DMM_ALGORITHM_BITPOS 22 +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_DMM_ALGORITHM_MASK GENMASK(0, 0) +enum icp_qat_hw_comp_51_dmm_algorithm { + ICP_QAT_HW_COMP_51_DMM_ALGORITHM_EDMM_ENABLED = 0x0, + ICP_QAT_HW_COMP_51_DMM_ALGORITHM_ZSTD_DMM_LITE = 0x1, +}; + +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_DMM_ALGORITHM_DEFAULT_VAL \ + ICP_QAT_HW_COMP_51_DMM_ALGORITHM_EDMM_ENABLED +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_TOKEN_FUSION_INTERNAL_ONLY_BITPOS 21 +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_TOKEN_FUSION_INTERNAL_ONLY_MASK GENMASK(0, 0) +enum icp_qat_hw_comp_51_token_fusion_internal_only { + ICP_QAT_HW_COMP_51_TOKEN_FUSION_INTERNAL_ONLY_ENABLED = 0x0, + ICP_QAT_HW_COMP_51_TOKEN_FUSION_INTERNAL_ONLY_DISABLED = 0x1, +}; + +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_TOKEN_FUSION_INTERNAL_ONLY_DEFAULT_VAL \ + ICP_QAT_HW_COMP_51_TOKEN_FUSION_INTERNAL_ONLY_ENABLED +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_BMS_BITPOS 19 +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_BMS_MASK GENMASK(1, 0) +enum icp_qat_hw_comp_51_bms { + ICP_QAT_HW_COMP_51_BMS_BMS_64KB = 0x0, + ICP_QAT_HW_COMP_51_BMS_BMS_256KB = 0x1, + ICP_QAT_HW_COMP_51_BMS_BMS_1MB = 0x2, + ICP_QAT_HW_COMP_51_BMS_BMS_4MB = 0x3, +}; + +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_BMS_DEFAULT_VAL \ + ICP_QAT_HW_COMP_51_BMS_BMS_64KB +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_SCB_MODE_RESET_MASK_BITPOS 18 +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_SCB_MODE_RESET_MASK_MASK GENMASK(0, 0) +enum icp_qat_hw_comp_51_scb_mode_reset_mask { + ICP_QAT_HW_COMP_51_SCB_MODE_RESET_MASK_DO_NOT_RESET_HB_HT = 0x0, + ICP_QAT_HW_COMP_51_SCB_MODE_RESET_MASK_RESET_HB_HT = 0x1, +}; + +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_SCB_MODE_RESET_MASK_DEFAULT_VAL \ + ICP_QAT_HW_COMP_51_SCB_MODE_RESET_MASK_DO_NOT_RESET_HB_HT +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_ZSTD_FRAME_GEN_DEC_EN_BITPOS 2 +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_ZSTD_FRAME_GEN_DEC_EN_MASK GENMASK(0, 0) +enum icp_qat_hw_comp_51_zstd_frame_gen_dec_en { + ICP_QAT_HW_COMP_51_ZSTD_FRAME_GEN_DEC_EN_ZSTD_FRAME_HDR_DISABLE = 0x0, + ICP_QAT_HW_COMP_51_ZSTD_FRAME_GEN_DEC_EN_ZSTD_FRAME_HDR_ENABLE = 0x1, +}; + +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_ZSTD_FRAME_GEN_DEC_EN_DEFAULT_VAL \ + ICP_QAT_HW_COMP_51_ZSTD_FRAME_GEN_DEC_EN_ZSTD_FRAME_HDR_ENABLE +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_CNV_DISABLE_BITPOS 1 +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_CNV_DISABLE_MASK GENMASK(0, 0) +enum icp_qat_hw_comp_51_cnv_disable { + ICP_QAT_HW_COMP_51_CNV_DISABLE_CNV_ENABLED = 0x0, + ICP_QAT_HW_COMP_51_CNV_DISABLE_CNV_DISABLED = 0x1, +}; + +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_CNV_DISABLE_DEFAULT_VAL \ + ICP_QAT_HW_COMP_51_CNV_DISABLE_CNV_ENABLED +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_ASB_DISABLE_BITPOS 0 +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_ASB_DISABLE_MASK GENMASK(0, 0) +enum icp_qat_hw_comp_51_asb_disable { + ICP_QAT_HW_COMP_51_ASB_DISABLE_ASB_ENABLED = 0x0, + ICP_QAT_HW_COMP_51_ASB_DISABLE_ASB_DISABLED = 0x1, +}; + +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_ASB_DISABLE_DEFAULT_VAL \ + ICP_QAT_HW_COMP_51_ASB_DISABLE_ASB_ENABLED +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_SPEC_DECODER_INTERNAL_ONLY_BITPOS 21 +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_SPEC_DECODER_INTERNAL_ONLY_MASK GENMASK(0, 0) +enum icp_qat_hw_comp_51_spec_decoder_internal_only { + ICP_QAT_HW_COMP_51_SPEC_DECODER_INTERNAL_ONLY_NORMAL = 0x0, + ICP_QAT_HW_COMP_51_SPEC_DECODER_INTERNAL_ONLY_DISABLED = 0x1, +}; + +#define 
ICP_QAT_HW_COMP_51_CONFIG_CSR_SPEC_DECODER_INTERNAL_ONLY_DEFAULT_VAL \ + ICP_QAT_HW_COMP_51_SPEC_DECODER_INTERNAL_ONLY_NORMAL +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_MINI_XCAM_INTERNAL_ONLY_BITPOS 20 +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_MINI_XCAM_INTERNAL_ONLY_MASK GENMASK(0, 0) +enum icp_qat_hw_comp_51_mini_xcam_internal_only { + ICP_QAT_HW_COMP_51_MINI_XCAM_INTERNAL_ONLY_NORMAL = 0x0, + ICP_QAT_HW_COMP_51_MINI_XCAM_INTERNAL_ONLY_DISABLED = 0x1, +}; + +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_MINI_XCAM_INTERNAL_ONLY_DEFAULT_VAL \ + ICP_QAT_HW_COMP_51_MINI_XCAM_INTERNAL_ONLY_NORMAL +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_REP_OFF_ENC_INTERNAL_ONLY_BITPOS 19 +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_REP_OFF_ENC_INTERNAL_ONLY_MASK GENMASK(0, 0) +enum icp_qat_hw_comp_51_rep_off_enc_internal_only { + ICP_QAT_HW_COMP_51_REP_OFF_ENC_INTERNAL_ONLY_ENABLED = 0x0, + ICP_QAT_HW_COMP_51_REP_OFF_ENC_INTERNAL_ONLY_DISABLED = 0x1, +}; + +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_REP_OFF_ENC_INTERNAL_ONLY_DEFAULT_VAL \ + ICP_QAT_HW_COMP_51_REP_OFF_ENC_INTERNAL_ONLY_ENABLED +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_PROG_BLOCK_DROP_INTERNAL_ONLY_BITPOS 18 +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_PROG_BLOCK_DROP_INTERNAL_ONLY_MASK GENMASK(0, 0) +enum icp_qat_hw_comp_51_prog_block_drop_internal_only { + ICP_QAT_HW_COMP_51_PROG_BLOCK_DROP_INTERNAL_ONLY_DISABLE = 0x0, + ICP_QAT_HW_COMP_51_PROG_BLOCK_DROP_INTERNAL_ONLY_ENABLE = 0x1, +}; + +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_PROG_BLOCK_DROP_INTERNAL_ONLY_DEFAULT_VAL \ + ICP_QAT_HW_COMP_51_PROG_BLOCK_DROP_INTERNAL_ONLY_DISABLE +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_SKIP_HASH_OVERRIDE_INTERNAL_ONLY_BITPOS 17 +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_SKIP_HASH_OVERRIDE_INTERNAL_ONLY_MASK GENMASK(0, 0) +enum icp_qat_hw_comp_51_skip_hash_override_internal_only { + ICP_QAT_HW_COMP_51_SKIP_HASH_OVERRIDE_INTERNAL_ONLY_DETERMINE_HASH_PARAMS = 0x0, + ICP_QAT_HW_COMP_51_SKIP_HASH_OVERRIDE_INTERNAL_ONLY_OVERRIDE_HASH_PARAMS = 0x1, +}; + +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_SKIP_HASH_OVERRIDE_INTERNAL_ONLY_DEFAULT_VAL \ + ICP_QAT_HW_COMP_51_SKIP_HASH_OVERRIDE_INTERNAL_ONLY_DETERMINE_HASH_PARAMS +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_HBS_BITPOS 14 +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_HBS_MASK GENMASK(2, 0) +enum icp_qat_hw_comp_51_hbs { + ICP_QAT_HW_COMP_51_HBS_32KB = 0x0, + ICP_QAT_HW_COMP_51_HBS_64KB = 0x1, +}; + +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_HBS_DEFAULT_VAL \ + ICP_QAT_HW_COMP_51_HBS_32KB +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_ABD_BITPOS 13 +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_ABD_MASK GENMASK(0, 0) +enum icp_qat_hw_comp_51_abd { + ICP_QAT_HW_COMP_51_ABD_ABD_ENABLED = 0x0, + ICP_QAT_HW_COMP_51_ABD_ABD_DISABLED = 0x1, +}; + +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_ABD_DEFAULT_VAL \ + ICP_QAT_HW_COMP_51_ABD_ABD_ENABLED +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_LLLBD_CTRL_BITPOS 12 +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_LLLBD_CTRL_MASK GENMASK(0, 0) +enum icp_qat_hw_comp_51_lllbd_ctrl { + ICP_QAT_HW_COMP_51_LLLBD_CTRL_LLLBD_ENABLED = 0x0, + ICP_QAT_HW_COMP_51_LLLBD_CTRL_LLLBD_DISABLED = 0x1, +}; + +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_LLLBD_CTRL_DEFAULT_VAL \ + ICP_QAT_HW_COMP_51_LLLBD_CTRL_LLLBD_ENABLED +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_SEARCH_DEPTH_BITPOS 8 +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_SEARCH_DEPTH_MASK GENMASK(3, 0) +enum icp_qat_hw_comp_51_search_depth { + ICP_QAT_HW_COMP_51_SEARCH_DEPTH_LEVEL_1 = 0x1, + ICP_QAT_HW_COMP_51_SEARCH_DEPTH_LEVEL_6 = 0x3, + ICP_QAT_HW_COMP_51_SEARCH_DEPTH_LEVEL_9 = 0x4, + ICP_QAT_HW_COMP_51_SEARCH_DEPTH_LEVEL_10 = 0x4, +}; + +#define 
ICP_QAT_HW_COMP_51_CONFIG_CSR_SEARCH_DEPTH_DEFAULT_VAL \ + ICP_QAT_HW_COMP_51_SEARCH_DEPTH_LEVEL_1 +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_FORMAT_BITPOS 5 +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_FORMAT_MASK GENMASK(2, 0) +enum icp_qat_hw_comp_51_format { + ICP_QAT_HW_COMP_51_FORMAT_ILZ77 = 0x1, + ICP_QAT_HW_COMP_51_FORMAT_LZ4 = 0x2, + ICP_QAT_HW_COMP_51_FORMAT_LZ4s = 0x3, + ICP_QAT_HW_COMP_51_FORMAT_ZSTD = 0x4, +}; + +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_FORMAT_DEFAULT_VAL \ + ICP_QAT_HW_COMP_51_FORMAT_ILZ77 +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_MIN_MATCH_CONTROL_BITPOS 4 +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_MIN_MATCH_CONTROL_MASK GENMASK(0, 0) +enum icp_qat_hw_comp_51_min_match_control { + ICP_QAT_HW_COMP_51_MIN_MATCH_CONTROL_MATCH_3B = 0x0, + ICP_QAT_HW_COMP_51_MIN_MATCH_CONTROL_MATCH_4B = 0x1, +}; + +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_MIN_MATCH_CONTROL_DEFAULT_VAL \ + ICP_QAT_HW_COMP_51_MIN_MATCH_CONTROL_MATCH_3B +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_SKIP_HASH_COLLISION_BITPOS 3 +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_SKIP_HASH_COLLISION_MASK GENMASK(0, 0) +enum icp_qat_hw_comp_51_skip_hash_collision { + ICP_QAT_HW_COMP_51_SKIP_HASH_COLLISION_ALLOW = 0x0, + ICP_QAT_HW_COMP_51_SKIP_HASH_COLLISION_DONT_ALLOW = 0x1, +}; + +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_SKIP_HASH_COLLISION_DEFAULT_VAL \ + ICP_QAT_HW_COMP_51_SKIP_HASH_COLLISION_ALLOW +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_SKIP_HASH_UPDATE_BITPOS 2 +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_SKIP_HASH_UPDATE_MASK GENMASK(0, 0) +enum icp_qat_hw_comp_51_skip_hash_update { + ICP_QAT_HW_COMP_51_SKIP_HASH_UPDATE_ALLOW = 0x0, + ICP_QAT_HW_COMP_51_SKIP_HASH_UPDATE_DONT_ALLOW = 0x1, +}; + +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_SKIP_HASH_UPDATE_DEFAULT_VAL \ + ICP_QAT_HW_COMP_51_SKIP_HASH_UPDATE_ALLOW +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_BYTE_SKIP_BITPOS 1 +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_BYTE_SKIP_MASK GENMASK(0, 0) +enum icp_qat_hw_comp_51_byte_skip { + ICP_QAT_HW_COMP_51_BYTE_SKIP_3BYTE_TOKEN = 0x0, + ICP_QAT_HW_COMP_51_BYTE_SKIP_3BYTE_LITERAL = 0x1, +}; + +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_BYTE_SKIP_DEFAULT_VAL \ + ICP_QAT_HW_COMP_51_BYTE_SKIP_3BYTE_TOKEN +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_LZ4_BLOCK_CHECKSUM_BITPOS 0 +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_LZ4_BLOCK_CHECKSUM_MASK GENMASK(0, 0) +enum icp_qat_hw_comp_51_lz4_block_checksum { + ICP_QAT_HW_COMP_51_LZ4_BLOCK_CHECKSUM_ABSENT = 0x0, + ICP_QAT_HW_COMP_51_LZ4_BLOCK_CHECKSUM_PRESENT = 0x1, +}; + +#define ICP_QAT_HW_COMP_51_CONFIG_CSR_LZ4_BLOCK_CHECKSUM_DEFAULT_VAL \ + ICP_QAT_HW_COMP_51_LZ4_BLOCK_CHECKSUM_ABSENT +#define ICP_QAT_HW_DECOMP_51_CONFIG_CSR_DISCARD_DATA_BITPOS 26 +#define ICP_QAT_HW_DECOMP_51_CONFIG_CSR_DISCARD_DATA_MASK GENMASK(0, 0) +enum icp_qat_hw_decomp_51_discard_data { + ICP_QAT_HW_DECOMP_51_DISCARD_DATA_DISABLED = 0x0, + ICP_QAT_HW_DECOMP_51_DISCARD_DATA_ENABLED = 0x1, +}; + +#define ICP_QAT_HW_DECOMP_51_CONFIG_CSR_DISCARD_DATA_DEFAULT_VAL \ + ICP_QAT_HW_DECOMP_51_DISCARD_DATA_DISABLED +#define ICP_QAT_HW_DECOMP_51_CONFIG_CSR_BMS_BITPOS 19 +#define ICP_QAT_HW_DECOMP_51_CONFIG_CSR_BMS_MASK GENMASK(1, 0) +enum icp_qat_hw_decomp_51_bms { + ICP_QAT_HW_DECOMP_51_BMS_BMS_64KB = 0x0, + ICP_QAT_HW_DECOMP_51_BMS_BMS_256KB = 0x1, + ICP_QAT_HW_DECOMP_51_BMS_BMS_1MB = 0x2, + ICP_QAT_HW_DECOMP_51_BMS_BMS_4MB = 0x3, +}; + +#define ICP_QAT_HW_DECOMP_51_CONFIG_CSR_BMS_DEFAULT_VAL \ + ICP_QAT_HW_DECOMP_51_BMS_BMS_64KB +#define ICP_QAT_HW_DECOMP_51_CONFIG_CSR_ZSTD_FRAME_GEN_DEC_EN_BITPOS 2 +#define ICP_QAT_HW_DECOMP_51_CONFIG_CSR_ZSTD_FRAME_GEN_DEC_EN_MASK GENMASK(0, 
0) +enum icp_qat_hw_decomp_51_zstd_frame_gen_dec_en { + ICP_QAT_HW_DECOMP_51_ZSTD_FRAME_GEN_DEC_EN_ZSTD_FRAME_HDR_DISABLE = 0x0, + ICP_QAT_HW_DECOMP_51_ZSTD_FRAME_GEN_DEC_EN_ZSTD_FRAME_HDR_ENABLE = 0x1, +}; + +#define ICP_QAT_HW_DECOMP_51_CONFIG_CSR_ZSTD_FRAME_GEN_DEC_EN_DEFAULT_VAL \ + ICP_QAT_HW_DECOMP_51_ZSTD_FRAME_GEN_DEC_EN_ZSTD_FRAME_HDR_ENABLE +#define ICP_QAT_HW_DECOMP_51_CONFIG_CSR_SPEC_DECODER_INTERNAL_ONLY_BITPOS 21 +#define ICP_QAT_HW_DECOMP_51_CONFIG_CSR_SPEC_DECODER_INTERNAL_ONLY_MASK GENMASK(0, 0) +enum icp_qat_hw_decomp_51_spec_decoder_internal_only { + ICP_QAT_HW_DECOMP_51_SPEC_DECODER_INTERNAL_ONLY_NORMAL = 0x0, + ICP_QAT_HW_DECOMP_51_SPEC_DECODER_INTERNAL_ONLY_DISABLED = 0x1, +}; + +#define ICP_QAT_HW_DECOMP_51_CONFIG_CSR_SPEC_DECODER_INTERNAL_ONLY_DEFAULT_VAL \ + ICP_QAT_HW_DECOMP_51_SPEC_DECODER_INTERNAL_ONLY_NORMAL +#define ICP_QAT_HW_DECOMP_51_CONFIG_CSR_MINI_XCAM_INTERNAL_ONLY_BITPOS 20 +#define ICP_QAT_HW_DECOMP_51_CONFIG_CSR_MINI_XCAM_INTERNAL_ONLY_MASK GENMASK(0, 0) +enum icp_qat_hw_decomp_51_mini_xcam_internal_only { + ICP_QAT_HW_DECOMP_51_MINI_XCAM_INTERNAL_ONLY_NORMAL = 0x0, + ICP_QAT_HW_DECOMP_51_MINI_XCAM_INTERNAL_ONLY_DISABLED = 0x1, +}; + +#define ICP_QAT_HW_DECOMP_51_CONFIG_CSR_MINI_XCAM_INTERNAL_ONLY_DEFAULT_VAL \ + ICP_QAT_HW_DECOMP_51_MINI_XCAM_INTERNAL_ONLY_NORMAL +#define ICP_QAT_HW_DECOMP_51_CONFIG_CSR_HBS_BITPOS 14 +#define ICP_QAT_HW_DECOMP_51_CONFIG_CSR_HBS_MASK GENMASK(2, 0) +enum icp_qat_hw_decomp_51_hbs { + ICP_QAT_HW_DECOMP_51_HBS_32KB = 0x0, + ICP_QAT_HW_DECOMP_51_HBS_64KB = 0x1, +}; + +#define ICP_QAT_HW_DECOMP_51_CONFIG_CSR_HBS_DEFAULT_VAL \ + ICP_QAT_HW_DECOMP_51_HBS_32KB +#define ICP_QAT_HW_DECOMP_51_CONFIG_CSR_FORMAT_BITPOS 5 +#define ICP_QAT_HW_DECOMP_51_CONFIG_CSR_FORMAT_MASK GENMASK(2, 0) +enum icp_qat_hw_decomp_51_format { + ICP_QAT_HW_DECOMP_51_FORMAT_ILZ77 = 0x1, + ICP_QAT_HW_DECOMP_51_FORMAT_LZ4 = 0x2, + ICP_QAT_HW_DECOMP_51_FORMAT_RESERVED = 0x3, + ICP_QAT_HW_DECOMP_51_FORMAT_ZSTD = 0x4, +}; + +#define ICP_QAT_HW_DECOMP_51_CONFIG_CSR_FORMAT_DEFAULT_VAL \ + ICP_QAT_HW_DECOMP_51_FORMAT_ILZ77 +#define ICP_QAT_HW_DECOMP_51_CONFIG_CSR_LZ4_BLOCK_CHECKSUM_BITPOS 0 +#define ICP_QAT_HW_DECOMP_51_CONFIG_CSR_LZ4_BLOCK_CHECKSUM_MASK GENMASK(0, 0) +enum icp_qat_hw_decomp_51_lz4_block_checksum { + ICP_QAT_HW_DECOMP_51_LZ4_BLOCK_CHECKSUM_ABSENT = 0x0, + ICP_QAT_HW_DECOMP_51_LZ4_BLOCK_CHECKSUM_PRESENT = 0x1, +}; + +#define ICP_QAT_HW_DECOMP_51_CONFIG_CSR_LZ4_BLOCK_CHECKSUM_DEFAULT_VAL \ + ICP_QAT_HW_DECOMP_51_LZ4_BLOCK_CHECKSUM_ABSENT + +#endif /* ICP_QAT_HW_51_COMP_DEFS_H_ */ From patchwork Wed Apr 30 11:34:53 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Suman Kumar Chakraborty X-Patchwork-Id: 886554 Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id EB1982586C1 for ; Wed, 30 Apr 2025 11:35:23 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.15 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1746012930; cv=none; b=gq7mbb8VMHEZ0D0IZp8zFOhZmZt1dr9HFfDTu8Waxgz9Twu9fdZm7eNyknXpB/P0nWlxGNYzhZrWFDP6KNVDGaRFgseGXYhXrxoO9tpGhudgbjjYmm9SSnJbQrQvN3/zQwyEDVzrepjs47h04pL2LJWSib07TaNQE5/BOr+NrnQ= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1746012930; c=relaxed/simple; 
From: Suman Kumar Chakraborty 
To: herbert@gondor.apana.org.au 
Cc: linux-crypto@vger.kernel.org, qat-linux@intel.com 
Subject: [PATCH 11/11] crypto: qat - add qat_6xxx driver 
Date: Wed, 30 Apr 2025 12:34:53 +0100 
Message-Id: <20250430113453.1587497-12-suman.kumar.chakraborty@intel.com> 
X-Mailer: git-send-email 2.40.1 
In-Reply-To: <20250430113453.1587497-1-suman.kumar.chakraborty@intel.com> 
References: <20250430113453.1587497-1-suman.kumar.chakraborty@intel.com> 
MIME-Version: 1.0 

From: Laurent M Coquerel 

Add a new driver, qat_6xxx, to support QAT GEN6 devices. QAT GEN6 devices are a follow-on to the GEN4 generation and, unlike the previous generation, they can support all three services (symmetric crypto, asymmetric crypto, and data compression) concurrently.

To allow the qat_6xxx driver to reuse some of the GEN4 logic, a new abstraction layer has been introduced to bridge the two implementations. This avoids code duplication and keeps the qat_6xxx driver isolated from the GEN4 logic. This approach is used for the PF-to-VF logic and the HW CSR access logic.
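As a rough illustration of this bridging approach, the sketch below models the per-generation ops-table pattern with simplified stand-in types: a newer generation reuses an older generation's callbacks where the behaviour is shared and overrides only what differs. The struct and function names here (pfvf_ops, csr_ops, gen4_*, gen6_*) and the register layouts are placeholders invented for illustration; they are not the driver's actual interfaces.

#include <stdio.h>

/* Simplified stand-ins for the per-generation ops tables. */
struct pfvf_ops {
	void (*enable_comms)(void);		/* PF-to-VF doorbell setup */
};

struct csr_ops {
	unsigned int (*ring_csr_addr)(unsigned int bank, unsigned int reg);
};

/* GEN4 implementations (placeholders). */
static void gen4_enable_comms(void)
{
	puts("gen4: enabling PF/VF comms");
}

static unsigned int gen4_ring_csr_addr(unsigned int bank, unsigned int reg)
{
	return 0x100000u + bank * 0x1000u + reg;	/* made-up layout */
}

static void gen4_init_hw_csr_ops(struct csr_ops *ops)
{
	ops->ring_csr_addr = gen4_ring_csr_addr;
}

/*
 * GEN6 reuses the GEN4 PF-to-VF logic as-is, but overrides the pieces
 * of the CSR access logic that differ on the new hardware.
 */
static void gen6_init_pf_pfvf_ops(struct pfvf_ops *ops)
{
	ops->enable_comms = gen4_enable_comms;		/* shared with GEN4 */
}

static unsigned int gen6_ring_csr_addr(unsigned int bank, unsigned int reg)
{
	return 0x200000u + bank * 0x2000u + reg;	/* made-up layout */
}

static void gen6_init_hw_csr_ops(struct csr_ops *ops)
{
	gen4_init_hw_csr_ops(ops);			/* start from GEN4 defaults */
	ops->ring_csr_addr = gen6_ring_csr_addr;	/* override what differs */
}

int main(void)
{
	struct pfvf_ops pfvf;
	struct csr_ops csr;

	gen6_init_pf_pfvf_ops(&pfvf);
	gen6_init_hw_csr_ops(&csr);

	pfvf.enable_comms();
	printf("bank 1, reg 0x10 -> %#x\n", csr.ring_csr_addr(1, 0x10));
	return 0;
}

In the driver itself, the equivalent wiring happens in adf_init_hw_data_6xxx(), which fills the adf_hw_device_data ops tables (csr_ops, pfvf_ops, dc_ops) through the adf_gen6_init_*() helpers added by this patch.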
Signed-off-by: Laurent M Coquerel Co-developed-by: George Abraham P Signed-off-by: George Abraham P Co-developed-by: Karthikeyan Gopal Signed-off-by: Karthikeyan Gopal Co-developed-by: Suman Kumar Chakraborty Signed-off-by: Suman Kumar Chakraborty Reviewed-by: Giovanni Cabiddu --- drivers/crypto/intel/qat/Kconfig | 12 + drivers/crypto/intel/qat/Makefile | 1 + drivers/crypto/intel/qat/qat_6xxx/Makefile | 3 + .../intel/qat/qat_6xxx/adf_6xxx_hw_data.c | 843 ++++++++++++++++++ .../intel/qat/qat_6xxx/adf_6xxx_hw_data.h | 148 +++ drivers/crypto/intel/qat/qat_6xxx/adf_drv.c | 224 +++++ drivers/crypto/intel/qat/qat_common/Makefile | 1 + .../intel/qat/qat_common/adf_accel_devices.h | 2 + .../intel/qat/qat_common/adf_cfg_common.h | 1 + .../intel/qat/qat_common/adf_fw_config.h | 1 + .../crypto/intel/qat/qat_common/adf_gen6_pm.h | 28 + .../intel/qat/qat_common/adf_gen6_shared.c | 49 + .../intel/qat/qat_common/adf_gen6_shared.h | 15 + 13 files changed, 1328 insertions(+) create mode 100644 drivers/crypto/intel/qat/qat_6xxx/Makefile create mode 100644 drivers/crypto/intel/qat/qat_6xxx/adf_6xxx_hw_data.c create mode 100644 drivers/crypto/intel/qat/qat_6xxx/adf_6xxx_hw_data.h create mode 100644 drivers/crypto/intel/qat/qat_6xxx/adf_drv.c create mode 100644 drivers/crypto/intel/qat/qat_common/adf_gen6_pm.h create mode 100644 drivers/crypto/intel/qat/qat_common/adf_gen6_shared.c create mode 100644 drivers/crypto/intel/qat/qat_common/adf_gen6_shared.h diff --git a/drivers/crypto/intel/qat/Kconfig b/drivers/crypto/intel/qat/Kconfig index 02fb8abe4e6e..359c61f0c8a1 100644 --- a/drivers/crypto/intel/qat/Kconfig +++ b/drivers/crypto/intel/qat/Kconfig @@ -70,6 +70,18 @@ config CRYPTO_DEV_QAT_420XX To compile this as a module, choose M here: the module will be called qat_420xx. +config CRYPTO_DEV_QAT_6XXX + tristate "Support for Intel(R) QuickAssist Technology QAT_6XXX" + depends on (X86 || COMPILE_TEST) + depends on PCI + select CRYPTO_DEV_QAT + help + Support for Intel(R) QuickAssist Technology QAT_6xxx + for accelerating crypto and compression workloads. + + To compile this as a module, choose M here: the module + will be called qat_6xxx. 
+ config CRYPTO_DEV_QAT_DH895xCCVF tristate "Support for Intel(R) DH895xCC Virtual Function" depends on PCI && (!CPU_BIG_ENDIAN || COMPILE_TEST) diff --git a/drivers/crypto/intel/qat/Makefile b/drivers/crypto/intel/qat/Makefile index 1eda8dc18515..abef14207afa 100644 --- a/drivers/crypto/intel/qat/Makefile +++ b/drivers/crypto/intel/qat/Makefile @@ -6,6 +6,7 @@ obj-$(CONFIG_CRYPTO_DEV_QAT_C3XXX) += qat_c3xxx/ obj-$(CONFIG_CRYPTO_DEV_QAT_C62X) += qat_c62x/ obj-$(CONFIG_CRYPTO_DEV_QAT_4XXX) += qat_4xxx/ obj-$(CONFIG_CRYPTO_DEV_QAT_420XX) += qat_420xx/ +obj-$(CONFIG_CRYPTO_DEV_QAT_6XXX) += qat_6xxx/ obj-$(CONFIG_CRYPTO_DEV_QAT_DH895xCCVF) += qat_dh895xccvf/ obj-$(CONFIG_CRYPTO_DEV_QAT_C3XXXVF) += qat_c3xxxvf/ obj-$(CONFIG_CRYPTO_DEV_QAT_C62XVF) += qat_c62xvf/ diff --git a/drivers/crypto/intel/qat/qat_6xxx/Makefile b/drivers/crypto/intel/qat/qat_6xxx/Makefile new file mode 100644 index 000000000000..4b4de67cb0c2 --- /dev/null +++ b/drivers/crypto/intel/qat/qat_6xxx/Makefile @@ -0,0 +1,3 @@ +# SPDX-License-Identifier: GPL-2.0-only +obj-$(CONFIG_CRYPTO_DEV_QAT_6XXX) += qat_6xxx.o +qat_6xxx-y := adf_drv.o adf_6xxx_hw_data.o diff --git a/drivers/crypto/intel/qat/qat_6xxx/adf_6xxx_hw_data.c b/drivers/crypto/intel/qat/qat_6xxx/adf_6xxx_hw_data.c new file mode 100644 index 000000000000..73d479383b1f --- /dev/null +++ b/drivers/crypto/intel/qat/qat_6xxx/adf_6xxx_hw_data.c @@ -0,0 +1,843 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright(c) 2025 Intel Corporation */ +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include "adf_6xxx_hw_data.h" +#include "icp_qat_fw_comp.h" +#include "icp_qat_hw_51_comp.h" + +#define RP_GROUP_0_MASK (BIT(0) | BIT(2)) +#define RP_GROUP_1_MASK (BIT(1) | BIT(3)) +#define RP_GROUP_ALL_MASK (RP_GROUP_0_MASK | RP_GROUP_1_MASK) + +#define ADF_AE_GROUP_0 GENMASK(3, 0) +#define ADF_AE_GROUP_1 GENMASK(7, 4) +#define ADF_AE_GROUP_2 BIT(8) + +struct adf_ring_config { + u32 ring_mask; + enum adf_cfg_service_type ring_type; + const unsigned long *thrd_mask; +}; + +static u32 rmask_two_services[] = { + RP_GROUP_0_MASK, + RP_GROUP_1_MASK, +}; + +enum adf_gen6_rps { + RP0 = 0, + RP1 = 1, + RP2 = 2, + RP3 = 3, + RP_MAX = RP3 +}; + +/* + * thrd_mask_[sym|asym|cpr|dcc]: these static arrays define the thread + * configuration for handling requests of specific services across the + * accelerator engines. Each element in an array corresponds to an + * accelerator engine, with the value being a bitmask that specifies which + * threads within that engine are capable of processing the particular service. + * + * For example, a value of 0x0C means that threads 2 and 3 are enabled for the + * service in the respective accelerator engine. 
+ */ +static const unsigned long thrd_mask_sym[ADF_6XXX_MAX_ACCELENGINES] = { + 0x0C, 0x0C, 0x0C, 0x0C, 0x1C, 0x1C, 0x1C, 0x1C, 0x00 +}; + +static const unsigned long thrd_mask_asym[ADF_6XXX_MAX_ACCELENGINES] = { + 0x70, 0x70, 0x70, 0x70, 0x60, 0x60, 0x60, 0x60, 0x00 +}; + +static const unsigned long thrd_mask_cpr[ADF_6XXX_MAX_ACCELENGINES] = { + 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x00 +}; + +static const unsigned long thrd_mask_dcc[ADF_6XXX_MAX_ACCELENGINES] = { + 0x00, 0x00, 0x00, 0x00, 0x07, 0x07, 0x03, 0x03, 0x00 +}; + +static const char *const adf_6xxx_fw_objs[] = { + [ADF_FW_CY_OBJ] = ADF_6XXX_CY_OBJ, + [ADF_FW_DC_OBJ] = ADF_6XXX_DC_OBJ, + [ADF_FW_ADMIN_OBJ] = ADF_6XXX_ADMIN_OBJ, +}; + +static const struct adf_fw_config adf_default_fw_config[] = { + { ADF_AE_GROUP_1, ADF_FW_DC_OBJ }, + { ADF_AE_GROUP_0, ADF_FW_CY_OBJ }, + { ADF_AE_GROUP_2, ADF_FW_ADMIN_OBJ }, +}; + +static struct adf_hw_device_class adf_6xxx_class = { + .name = ADF_6XXX_DEVICE_NAME, + .type = DEV_6XXX, +}; + +static bool services_supported(unsigned long mask) +{ + int num_svc; + + if (mask >= BIT(SVC_BASE_COUNT)) + return false; + + num_svc = hweight_long(mask); + switch (num_svc) { + case ADF_ONE_SERVICE: + return true; + case ADF_TWO_SERVICES: + case ADF_THREE_SERVICES: + return !test_bit(SVC_DCC, &mask); + default: + return false; + } +} + +static int get_service(unsigned long *mask) +{ + if (test_and_clear_bit(SVC_ASYM, mask)) + return SVC_ASYM; + + if (test_and_clear_bit(SVC_SYM, mask)) + return SVC_SYM; + + if (test_and_clear_bit(SVC_DC, mask)) + return SVC_DC; + + if (test_and_clear_bit(SVC_DCC, mask)) + return SVC_DCC; + + return -EINVAL; +} + +static enum adf_cfg_service_type get_ring_type(enum adf_services service) +{ + switch (service) { + case SVC_SYM: + return SYM; + case SVC_ASYM: + return ASYM; + case SVC_DC: + case SVC_DCC: + return COMP; + default: + return UNUSED; + } +} + +static const unsigned long *get_thrd_mask(enum adf_services service) +{ + switch (service) { + case SVC_SYM: + return thrd_mask_sym; + case SVC_ASYM: + return thrd_mask_asym; + case SVC_DC: + return thrd_mask_cpr; + case SVC_DCC: + return thrd_mask_dcc; + default: + return NULL; + } +} + +static int get_rp_config(struct adf_accel_dev *accel_dev, struct adf_ring_config *rp_config, + unsigned int *num_services) +{ + unsigned int i, nservices; + unsigned long mask; + int ret, service; + + ret = adf_get_service_mask(accel_dev, &mask); + if (ret) + return ret; + + nservices = hweight_long(mask); + if (nservices > MAX_NUM_CONCURR_SVC) + return -EINVAL; + + for (i = 0; i < nservices; i++) { + service = get_service(&mask); + if (service < 0) + return service; + + rp_config[i].ring_type = get_ring_type(service); + rp_config[i].thrd_mask = get_thrd_mask(service); + + /* + * If there is only one service enabled, use all ring pairs for + * that service. + * If there are two services enabled, use ring pairs 0 and 2 for + * one service and ring pairs 1 and 3 for the other service. 
+ */ + switch (nservices) { + case ADF_ONE_SERVICE: + rp_config[i].ring_mask = RP_GROUP_ALL_MASK; + break; + case ADF_TWO_SERVICES: + rp_config[i].ring_mask = rmask_two_services[i]; + break; + case ADF_THREE_SERVICES: + rp_config[i].ring_mask = BIT(i); + + /* If ASYM is enabled, use additional ring pair */ + if (service == SVC_ASYM) + rp_config[i].ring_mask |= BIT(RP3); + + break; + default: + return -EINVAL; + } + } + + *num_services = nservices; + + return 0; +} + +static u32 adf_gen6_get_arb_mask(struct adf_accel_dev *accel_dev, unsigned int ae) +{ + struct adf_ring_config rp_config[MAX_NUM_CONCURR_SVC]; + unsigned int num_services, i, thrd; + u32 ring_mask, thd2arb_mask = 0; + const unsigned long *p_mask; + + if (get_rp_config(accel_dev, rp_config, &num_services)) + return 0; + + /* + * The thd2arb_mask maps ring pairs to threads within an accelerator engine. + * It ensures that jobs submitted to ring pairs are scheduled on threads capable + * of handling the specified service type. + * + * Each group of 4 bits in the mask corresponds to a thread, with each bit + * indicating whether a job from a ring pair can be scheduled on that thread. + * The use of 4 bits is due to the organization of ring pairs into groups of + * four, where each group shares the same configuration. + */ + for (i = 0; i < num_services; i++) { + p_mask = &rp_config[i].thrd_mask[ae]; + ring_mask = rp_config[i].ring_mask; + + for_each_set_bit(thrd, p_mask, ADF_NUM_THREADS_PER_AE) + thd2arb_mask |= ring_mask << (thrd * 4); + } + + return thd2arb_mask; +} + +static u16 get_ring_to_svc_map(struct adf_accel_dev *accel_dev) +{ + enum adf_cfg_service_type rps[ADF_GEN6_NUM_BANKS_PER_VF] = { }; + struct adf_ring_config rp_config[MAX_NUM_CONCURR_SVC]; + unsigned int num_services, rp_num, i; + unsigned long cfg_mask; + u16 ring_to_svc_map; + + if (get_rp_config(accel_dev, rp_config, &num_services)) + return 0; + + /* + * Loop through the configured services and populate the `rps` array that + * contains what service that particular ring pair can handle (i.e. symmetric + * crypto, asymmetric crypto, data compression or compression chaining). + */ + for (i = 0; i < num_services; i++) { + cfg_mask = rp_config[i].ring_mask; + for_each_set_bit(rp_num, &cfg_mask, ADF_GEN6_NUM_BANKS_PER_VF) + rps[rp_num] = rp_config[i].ring_type; + } + + /* + * The ring_mask is structured into segments of 3 bits, with each + * segment representing the service configuration for a specific ring pair. + * Since ring pairs are organized into groups of 4, the ring_mask contains 4 + * such 3-bit segments, each corresponding to one ring pair. + * + * The device has 64 ring pairs, which are organized in groups of 4, namely + * 16 groups. Each group has the same configuration, represented here by + * `ring_to_svc_map`. + */ + ring_to_svc_map = rps[RP0] << ADF_CFG_SERV_RING_PAIR_0_SHIFT | + rps[RP1] << ADF_CFG_SERV_RING_PAIR_1_SHIFT | + rps[RP2] << ADF_CFG_SERV_RING_PAIR_2_SHIFT | + rps[RP3] << ADF_CFG_SERV_RING_PAIR_3_SHIFT; + + return ring_to_svc_map; +} + +static u32 get_accel_mask(struct adf_hw_device_data *self) +{ + return ADF_GEN6_ACCELERATORS_MASK; +} + +static u32 get_num_accels(struct adf_hw_device_data *self) +{ + return ADF_GEN6_MAX_ACCELERATORS; +} + +static u32 get_num_aes(struct adf_hw_device_data *self) +{ + return self ? 
hweight32(self->ae_mask) : 0; +} + +static u32 get_misc_bar_id(struct adf_hw_device_data *self) +{ + return ADF_GEN6_PMISC_BAR; +} + +static u32 get_etr_bar_id(struct adf_hw_device_data *self) +{ + return ADF_GEN6_ETR_BAR; +} + +static u32 get_sram_bar_id(struct adf_hw_device_data *self) +{ + return ADF_GEN6_SRAM_BAR; +} + +static enum dev_sku_info get_sku(struct adf_hw_device_data *self) +{ + return DEV_SKU_1; +} + +static void get_arb_info(struct arb_info *arb_info) +{ + arb_info->arb_cfg = ADF_GEN6_ARB_CONFIG; + arb_info->arb_offset = ADF_GEN6_ARB_OFFSET; + arb_info->wt2sam_offset = ADF_GEN6_ARB_WRK_2_SER_MAP_OFFSET; +} + +static void get_admin_info(struct admin_info *admin_csrs_info) +{ + admin_csrs_info->mailbox_offset = ADF_GEN6_MAILBOX_BASE_OFFSET; + admin_csrs_info->admin_msg_ur = ADF_GEN6_ADMINMSGUR_OFFSET; + admin_csrs_info->admin_msg_lr = ADF_GEN6_ADMINMSGLR_OFFSET; +} + +static u32 get_heartbeat_clock(struct adf_hw_device_data *self) +{ + return ADF_GEN6_COUNTER_FREQ; +} + +static void enable_error_correction(struct adf_accel_dev *accel_dev) +{ + void __iomem *csr = adf_get_pmisc_base(accel_dev); + + /* + * Enable all error notification bits in errsou3 except VFLR + * notification on host. + */ + ADF_CSR_WR(csr, ADF_GEN6_ERRMSK3, ADF_GEN6_VFLNOTIFY); +} + +static void enable_ints(struct adf_accel_dev *accel_dev) +{ + void __iomem *addr = adf_get_pmisc_base(accel_dev); + + /* Enable bundle interrupts */ + ADF_CSR_WR(addr, ADF_GEN6_SMIAPF_RP_X0_MASK_OFFSET, 0); + ADF_CSR_WR(addr, ADF_GEN6_SMIAPF_RP_X1_MASK_OFFSET, 0); + + /* Enable misc interrupts */ + ADF_CSR_WR(addr, ADF_GEN6_SMIAPF_MASK_OFFSET, 0); +} + +static void set_ssm_wdtimer(struct adf_accel_dev *accel_dev) +{ + void __iomem *addr = adf_get_pmisc_base(accel_dev); + u64 val_pke = ADF_SSM_WDT_PKE_DEFAULT_VALUE; + u64 val = ADF_SSM_WDT_DEFAULT_VALUE; + + /* Enable watchdog timer for sym and dc */ + ADF_CSR_WR64_LO_HI(addr, ADF_SSMWDTATHL_OFFSET, ADF_SSMWDTATHH_OFFSET, val); + ADF_CSR_WR64_LO_HI(addr, ADF_SSMWDTCNVL_OFFSET, ADF_SSMWDTCNVH_OFFSET, val); + ADF_CSR_WR64_LO_HI(addr, ADF_SSMWDTUCSL_OFFSET, ADF_SSMWDTUCSH_OFFSET, val); + ADF_CSR_WR64_LO_HI(addr, ADF_SSMWDTDCPRL_OFFSET, ADF_SSMWDTDCPRH_OFFSET, val); + + /* Enable watchdog timer for pke */ + ADF_CSR_WR64_LO_HI(addr, ADF_SSMWDTPKEL_OFFSET, ADF_SSMWDTPKEH_OFFSET, val_pke); +} + +/* + * The vector routing table is used to select the MSI-X entry to use for each + * interrupt source. + * The first ADF_GEN6_ETR_MAX_BANKS entries correspond to ring interrupts. + * The final entry corresponds to VF2PF or error interrupts. + * This vector table could be used to configure one MSI-X entry to be shared + * between multiple interrupt sources. + * + * The default routing is set to have a one to one correspondence between the + * interrupt source and the MSI-X entry used. + */ +static void set_msix_default_rttable(struct adf_accel_dev *accel_dev) +{ + void __iomem *csr = adf_get_pmisc_base(accel_dev); + unsigned int i; + + for (i = 0; i <= ADF_GEN6_ETR_MAX_BANKS; i++) + ADF_CSR_WR(csr, ADF_GEN6_MSIX_RTTABLE_OFFSET(i), i); +} + +static int reset_ring_pair(void __iomem *csr, u32 bank_number) +{ + u32 status; + int ret; + + /* + * Write rpresetctl register BIT(0) as 1. + * Since rpresetctl registers have no RW fields, no need to preserve + * values for other bits. Just write directly. 
+ */ + ADF_CSR_WR(csr, ADF_WQM_CSR_RPRESETCTL(bank_number), + ADF_WQM_CSR_RPRESETCTL_RESET); + + /* Read rpresetsts register and wait for rp reset to complete */ + ret = read_poll_timeout(ADF_CSR_RD, status, + status & ADF_WQM_CSR_RPRESETSTS_STATUS, + ADF_RPRESET_POLL_DELAY_US, + ADF_RPRESET_POLL_TIMEOUT_US, true, + csr, ADF_WQM_CSR_RPRESETSTS(bank_number)); + if (ret) + return ret; + + /* When ring pair reset is done, clear rpresetsts */ + ADF_CSR_WR(csr, ADF_WQM_CSR_RPRESETSTS(bank_number), ADF_WQM_CSR_RPRESETSTS_STATUS); + + return 0; +} + +static int ring_pair_reset(struct adf_accel_dev *accel_dev, u32 bank_number) +{ + struct adf_hw_device_data *hw_data = accel_dev->hw_device; + void __iomem *csr = adf_get_etr_base(accel_dev); + int ret; + + if (bank_number >= hw_data->num_banks) + return -EINVAL; + + dev_dbg(&GET_DEV(accel_dev), "ring pair reset for bank:%d\n", bank_number); + + ret = reset_ring_pair(csr, bank_number); + if (ret) + dev_err(&GET_DEV(accel_dev), "ring pair reset failed (timeout)\n"); + else + dev_dbg(&GET_DEV(accel_dev), "ring pair reset successful\n"); + + return ret; +} + +static int build_comp_block(void *ctx, enum adf_dc_algo algo) +{ + struct icp_qat_fw_comp_req *req_tmpl = ctx; + struct icp_qat_fw_comp_req_hdr_cd_pars *cd_pars = &req_tmpl->cd_pars; + struct icp_qat_hw_comp_51_config_csr_lower hw_comp_lower_csr = { }; + struct icp_qat_fw_comn_req_hdr *header = &req_tmpl->comn_hdr; + u32 lower_val; + + switch (algo) { + case QAT_DEFLATE: + header->service_cmd_id = ICP_QAT_FW_COMP_CMD_DYNAMIC; + break; + default: + return -EINVAL; + } + + hw_comp_lower_csr.lllbd = ICP_QAT_HW_COMP_51_LLLBD_CTRL_LLLBD_DISABLED; + hw_comp_lower_csr.sd = ICP_QAT_HW_COMP_51_SEARCH_DEPTH_LEVEL_1; + lower_val = ICP_QAT_FW_COMP_51_BUILD_CONFIG_LOWER(hw_comp_lower_csr); + cd_pars->u.sl.comp_slice_cfg_word[0] = lower_val; + cd_pars->u.sl.comp_slice_cfg_word[1] = 0; + + return 0; +} + +static int build_decomp_block(void *ctx, enum adf_dc_algo algo) +{ + struct icp_qat_fw_comp_req *req_tmpl = ctx; + struct icp_qat_fw_comp_req_hdr_cd_pars *cd_pars = &req_tmpl->cd_pars; + struct icp_qat_fw_comn_req_hdr *header = &req_tmpl->comn_hdr; + + switch (algo) { + case QAT_DEFLATE: + header->service_cmd_id = ICP_QAT_FW_COMP_CMD_DECOMPRESS; + break; + default: + return -EINVAL; + } + + cd_pars->u.sl.comp_slice_cfg_word[0] = 0; + cd_pars->u.sl.comp_slice_cfg_word[1] = 0; + + return 0; +} + +static void adf_gen6_init_dc_ops(struct adf_dc_ops *dc_ops) +{ + dc_ops->build_comp_block = build_comp_block; + dc_ops->build_decomp_block = build_decomp_block; +} + +static int adf_gen6_init_thd2arb_map(struct adf_accel_dev *accel_dev) +{ + struct adf_hw_device_data *hw_data = GET_HW_DATA(accel_dev); + u32 *thd2arb_map = hw_data->thd_to_arb_map; + unsigned int i; + + for (i = 0; i < hw_data->num_engines; i++) { + thd2arb_map[i] = adf_gen6_get_arb_mask(accel_dev, i); + dev_dbg(&GET_DEV(accel_dev), "ME:%d arb_mask:%#x\n", i, thd2arb_map[i]); + } + + return 0; +} + +static void set_vc_csr_for_bank(void __iomem *csr, u32 bank_number) +{ + u32 value; + + /* + * After each PF FLR, for each of the 64 ring pairs in the PF, the + * driver must program the ringmodectl CSRs. 
+ */ + value = ADF_CSR_RD(csr, ADF_GEN6_CSR_RINGMODECTL(bank_number)); + value |= FIELD_PREP(ADF_GEN6_RINGMODECTL_TC_MASK, ADF_GEN6_RINGMODECTL_TC_DEFAULT); + value |= FIELD_PREP(ADF_GEN6_RINGMODECTL_TC_EN_MASK, ADF_GEN6_RINGMODECTL_TC_EN_OP1); + ADF_CSR_WR(csr, ADF_GEN6_CSR_RINGMODECTL(bank_number), value); +} + +static int set_vc_config(struct adf_accel_dev *accel_dev) +{ + struct pci_dev *pdev = accel_to_pci_dev(accel_dev); + u32 value; + int err; + + /* + * After each PF FLR, the driver must program the Port Virtual Channel (VC) + * Control Registers. + * Read PVC0CTL then write the masked values. + */ + pci_read_config_dword(pdev, ADF_GEN6_PVC0CTL_OFFSET, &value); + value |= FIELD_PREP(ADF_GEN6_PVC0CTL_TCVCMAP_MASK, ADF_GEN6_PVC0CTL_TCVCMAP_DEFAULT); + err = pci_write_config_dword(pdev, ADF_GEN6_PVC0CTL_OFFSET, value); + if (err) { + dev_err(&GET_DEV(accel_dev), "pci write to PVC0CTL failed\n"); + return pcibios_err_to_errno(err); + } + + /* Read PVC1CTL then write masked values */ + pci_read_config_dword(pdev, ADF_GEN6_PVC1CTL_OFFSET, &value); + value |= FIELD_PREP(ADF_GEN6_PVC1CTL_TCVCMAP_MASK, ADF_GEN6_PVC1CTL_TCVCMAP_DEFAULT); + value |= FIELD_PREP(ADF_GEN6_PVC1CTL_VCEN_MASK, ADF_GEN6_PVC1CTL_VCEN_ON); + err = pci_write_config_dword(pdev, ADF_GEN6_PVC1CTL_OFFSET, value); + if (err) + dev_err(&GET_DEV(accel_dev), "pci write to PVC1CTL failed\n"); + + return pcibios_err_to_errno(err); +} + +static int adf_gen6_set_vc(struct adf_accel_dev *accel_dev) +{ + struct adf_hw_device_data *hw_data = GET_HW_DATA(accel_dev); + void __iomem *csr = adf_get_etr_base(accel_dev); + u32 i; + + for (i = 0; i < hw_data->num_banks; i++) { + dev_dbg(&GET_DEV(accel_dev), "set virtual channels for bank:%d\n", i); + set_vc_csr_for_bank(csr, i); + } + + return set_vc_config(accel_dev); +} + +static u32 get_ae_mask(struct adf_hw_device_data *self) +{ + unsigned long fuses = self->fuses[ADF_FUSECTL4]; + u32 mask = ADF_6XXX_ACCELENGINES_MASK; + + /* + * If bit 0 is set in the fuses, the first 4 engines are disabled. + * If bit 4 is set, the second group of 4 engines are disabled. + * If bit 8 is set, the admin engine (bit 8) is disabled. 
+ */ + if (test_bit(0, &fuses)) + mask &= ~ADF_AE_GROUP_0; + + if (test_bit(4, &fuses)) + mask &= ~ADF_AE_GROUP_1; + + if (test_bit(8, &fuses)) + mask &= ~ADF_AE_GROUP_2; + + return mask; +} + +static u32 get_accel_cap(struct adf_accel_dev *accel_dev) +{ + u32 capabilities_sym, capabilities_asym; + u32 capabilities_dc; + unsigned long mask; + u32 caps = 0; + u32 fusectl1; + + fusectl1 = GET_HW_DATA(accel_dev)->fuses[ADF_FUSECTL1]; + + /* Read accelerator capabilities mask */ + capabilities_sym = ICP_ACCEL_CAPABILITIES_CRYPTO_SYMMETRIC | + ICP_ACCEL_CAPABILITIES_CIPHER | + ICP_ACCEL_CAPABILITIES_AUTHENTICATION | + ICP_ACCEL_CAPABILITIES_SHA3 | + ICP_ACCEL_CAPABILITIES_SHA3_EXT | + ICP_ACCEL_CAPABILITIES_CHACHA_POLY | + ICP_ACCEL_CAPABILITIES_AESGCM_SPC | + ICP_ACCEL_CAPABILITIES_AES_V2; + + /* A set bit in fusectl1 means the corresponding feature is OFF in this SKU */ + if (fusectl1 & ICP_ACCEL_GEN6_MASK_UCS_SLICE) { + capabilities_sym &= ~ICP_ACCEL_CAPABILITIES_CRYPTO_SYMMETRIC; + capabilities_sym &= ~ICP_ACCEL_CAPABILITIES_CIPHER; + capabilities_sym &= ~ICP_ACCEL_CAPABILITIES_CHACHA_POLY; + capabilities_sym &= ~ICP_ACCEL_CAPABILITIES_AESGCM_SPC; + capabilities_sym &= ~ICP_ACCEL_CAPABILITIES_AES_V2; + capabilities_sym &= ~ICP_ACCEL_CAPABILITIES_CIPHER; + } + if (fusectl1 & ICP_ACCEL_GEN6_MASK_AUTH_SLICE) { + capabilities_sym &= ~ICP_ACCEL_CAPABILITIES_AUTHENTICATION; + capabilities_sym &= ~ICP_ACCEL_CAPABILITIES_SHA3; + capabilities_sym &= ~ICP_ACCEL_CAPABILITIES_SHA3_EXT; + capabilities_sym &= ~ICP_ACCEL_CAPABILITIES_CIPHER; + } + + capabilities_asym = 0; + + capabilities_dc = ICP_ACCEL_CAPABILITIES_COMPRESSION | + ICP_ACCEL_CAPABILITIES_LZ4_COMPRESSION | + ICP_ACCEL_CAPABILITIES_LZ4S_COMPRESSION | + ICP_ACCEL_CAPABILITIES_CNV_INTEGRITY64; + + if (fusectl1 & ICP_ACCEL_GEN6_MASK_CPR_SLICE) { + capabilities_dc &= ~ICP_ACCEL_CAPABILITIES_COMPRESSION; + capabilities_dc &= ~ICP_ACCEL_CAPABILITIES_LZ4_COMPRESSION; + capabilities_dc &= ~ICP_ACCEL_CAPABILITIES_LZ4S_COMPRESSION; + capabilities_dc &= ~ICP_ACCEL_CAPABILITIES_CNV_INTEGRITY64; + } + + if (adf_get_service_mask(accel_dev, &mask)) + return 0; + + if (test_bit(SVC_ASYM, &mask)) + caps |= capabilities_asym; + if (test_bit(SVC_SYM, &mask)) + caps |= capabilities_sym; + if (test_bit(SVC_DC, &mask)) + caps |= capabilities_dc; + if (test_bit(SVC_DCC, &mask)) { + /* + * Sym capabilities are available for chaining operations, + * but sym crypto instances cannot be supported + */ + caps = capabilities_dc | capabilities_sym; + caps &= ~ICP_ACCEL_CAPABILITIES_CRYPTO_SYMMETRIC; + } + + return caps; +} + +static u32 uof_get_num_objs(struct adf_accel_dev *accel_dev) +{ + return ARRAY_SIZE(adf_default_fw_config); +} + +static const char *uof_get_name(struct adf_accel_dev *accel_dev, u32 obj_num) +{ + int num_fw_objs = ARRAY_SIZE(adf_6xxx_fw_objs); + int id; + + id = adf_default_fw_config[obj_num].obj; + if (id >= num_fw_objs) + return NULL; + + return adf_6xxx_fw_objs[id]; +} + +static const char *uof_get_name_6xxx(struct adf_accel_dev *accel_dev, u32 obj_num) +{ + return uof_get_name(accel_dev, obj_num); +} + +static int uof_get_obj_type(struct adf_accel_dev *accel_dev, u32 obj_num) +{ + if (obj_num >= uof_get_num_objs(accel_dev)) + return -EINVAL; + + return adf_default_fw_config[obj_num].obj; +} + +static u32 uof_get_ae_mask(struct adf_accel_dev *accel_dev, u32 obj_num) +{ + return adf_default_fw_config[obj_num].ae_mask; +} + +static const u32 *adf_get_arbiter_mapping(struct adf_accel_dev *accel_dev) +{ + if (adf_gen6_init_thd2arb_map(accel_dev)) + 
dev_warn(&GET_DEV(accel_dev), + "Failed to generate thread to arbiter mapping"); + + return GET_HW_DATA(accel_dev)->thd_to_arb_map; +} + +static int adf_init_device(struct adf_accel_dev *accel_dev) +{ + void __iomem *addr = adf_get_pmisc_base(accel_dev); + u32 status; + u32 csr; + int ret; + + /* Temporarily mask PM interrupt */ + csr = ADF_CSR_RD(addr, ADF_GEN6_ERRMSK2); + csr |= ADF_GEN6_PM_SOU; + ADF_CSR_WR(addr, ADF_GEN6_ERRMSK2, csr); + + /* Set DRV_ACTIVE bit to power up the device */ + ADF_CSR_WR(addr, ADF_GEN6_PM_INTERRUPT, ADF_GEN6_PM_DRV_ACTIVE); + + /* Poll status register to make sure the device is powered up */ + ret = read_poll_timeout(ADF_CSR_RD, status, + status & ADF_GEN6_PM_INIT_STATE, + ADF_GEN6_PM_POLL_DELAY_US, + ADF_GEN6_PM_POLL_TIMEOUT_US, true, addr, + ADF_GEN6_PM_STATUS); + if (ret) { + dev_err(&GET_DEV(accel_dev), "Failed to power up the device\n"); + return ret; + } + + dev_dbg(&GET_DEV(accel_dev), "Setting virtual channels for device qat_dev%d\n", + accel_dev->accel_id); + + ret = adf_gen6_set_vc(accel_dev); + if (ret) + dev_err(&GET_DEV(accel_dev), "Failed to set virtual channels\n"); + + return ret; +} + +static int enable_pm(struct adf_accel_dev *accel_dev) +{ + return adf_init_admin_pm(accel_dev, ADF_GEN6_PM_DEFAULT_IDLE_FILTER); +} + +static int dev_config(struct adf_accel_dev *accel_dev) +{ + int ret; + + ret = adf_cfg_section_add(accel_dev, ADF_KERNEL_SEC); + if (ret) + return ret; + + ret = adf_cfg_section_add(accel_dev, "Accelerator0"); + if (ret) + return ret; + + switch (adf_get_service_enabled(accel_dev)) { + case SVC_DC: + case SVC_DCC: + ret = adf_gen6_comp_dev_config(accel_dev); + break; + default: + ret = adf_gen6_no_dev_config(accel_dev); + break; + } + if (ret) + return ret; + + __set_bit(ADF_STATUS_CONFIGURED, &accel_dev->status); + + return ret; +} + +void adf_init_hw_data_6xxx(struct adf_hw_device_data *hw_data) +{ + hw_data->dev_class = &adf_6xxx_class; + hw_data->instance_id = adf_6xxx_class.instances++; + hw_data->num_banks = ADF_GEN6_ETR_MAX_BANKS; + hw_data->num_banks_per_vf = ADF_GEN6_NUM_BANKS_PER_VF; + hw_data->num_rings_per_bank = ADF_GEN6_NUM_RINGS_PER_BANK; + hw_data->num_accel = ADF_GEN6_MAX_ACCELERATORS; + hw_data->num_engines = ADF_6XXX_MAX_ACCELENGINES; + hw_data->num_logical_accel = 1; + hw_data->tx_rx_gap = ADF_GEN6_RX_RINGS_OFFSET; + hw_data->tx_rings_mask = ADF_GEN6_TX_RINGS_MASK; + hw_data->ring_to_svc_map = 0; + hw_data->alloc_irq = adf_isr_resource_alloc; + hw_data->free_irq = adf_isr_resource_free; + hw_data->enable_error_correction = enable_error_correction; + hw_data->get_accel_mask = get_accel_mask; + hw_data->get_ae_mask = get_ae_mask; + hw_data->get_num_accels = get_num_accels; + hw_data->get_num_aes = get_num_aes; + hw_data->get_sram_bar_id = get_sram_bar_id; + hw_data->get_etr_bar_id = get_etr_bar_id; + hw_data->get_misc_bar_id = get_misc_bar_id; + hw_data->get_arb_info = get_arb_info; + hw_data->get_admin_info = get_admin_info; + hw_data->get_accel_cap = get_accel_cap; + hw_data->get_sku = get_sku; + hw_data->init_admin_comms = adf_init_admin_comms; + hw_data->exit_admin_comms = adf_exit_admin_comms; + hw_data->send_admin_init = adf_send_admin_init; + hw_data->init_arb = adf_init_arb; + hw_data->exit_arb = adf_exit_arb; + hw_data->get_arb_mapping = adf_get_arbiter_mapping; + hw_data->enable_ints = enable_ints; + hw_data->reset_device = adf_reset_flr; + hw_data->admin_ae_mask = ADF_6XXX_ADMIN_AE_MASK; + hw_data->fw_name = ADF_6XXX_FW; + hw_data->fw_mmp_name = ADF_6XXX_MMP; + hw_data->uof_get_name = 
uof_get_name_6xxx; + hw_data->uof_get_num_objs = uof_get_num_objs; + hw_data->uof_get_obj_type = uof_get_obj_type; + hw_data->uof_get_ae_mask = uof_get_ae_mask; + hw_data->set_msix_rttable = set_msix_default_rttable; + hw_data->set_ssm_wdtimer = set_ssm_wdtimer; + hw_data->get_ring_to_svc_map = get_ring_to_svc_map; + hw_data->disable_iov = adf_disable_sriov; + hw_data->ring_pair_reset = ring_pair_reset; + hw_data->dev_config = dev_config; + hw_data->get_hb_clock = get_heartbeat_clock; + hw_data->num_hb_ctrs = ADF_NUM_HB_CNT_PER_AE; + hw_data->start_timer = adf_timer_start; + hw_data->stop_timer = adf_timer_stop; + hw_data->init_device = adf_init_device; + hw_data->enable_pm = enable_pm; + hw_data->services_supported = services_supported; + + adf_gen6_init_hw_csr_ops(&hw_data->csr_ops); + adf_gen6_init_pf_pfvf_ops(&hw_data->pfvf_ops); + adf_gen6_init_dc_ops(&hw_data->dc_ops); +} + +void adf_clean_hw_data_6xxx(struct adf_hw_device_data *hw_data) +{ + if (hw_data->dev_class->instances) + hw_data->dev_class->instances--; +} diff --git a/drivers/crypto/intel/qat/qat_6xxx/adf_6xxx_hw_data.h b/drivers/crypto/intel/qat/qat_6xxx/adf_6xxx_hw_data.h new file mode 100644 index 000000000000..78e2e2c5816e --- /dev/null +++ b/drivers/crypto/intel/qat/qat_6xxx/adf_6xxx_hw_data.h @@ -0,0 +1,148 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Copyright(c) 2025 Intel Corporation */ +#ifndef ADF_6XXX_HW_DATA_H_ +#define ADF_6XXX_HW_DATA_H_ + +#include +#include +#include + +#include "adf_accel_devices.h" +#include "adf_cfg_common.h" +#include "adf_dc.h" + +/* PCIe configuration space */ +#define ADF_GEN6_BAR_MASK (BIT(0) | BIT(2) | BIT(4)) +#define ADF_GEN6_SRAM_BAR 0 +#define ADF_GEN6_PMISC_BAR 1 +#define ADF_GEN6_ETR_BAR 2 +#define ADF_6XXX_MAX_ACCELENGINES 9 + +/* Clocks frequency */ +#define ADF_GEN6_COUNTER_FREQ (100 * HZ_PER_MHZ) + +/* Physical function fuses */ +#define ADF_GEN6_FUSECTL0_OFFSET 0x2C8 +#define ADF_GEN6_FUSECTL1_OFFSET 0x2CC +#define ADF_GEN6_FUSECTL4_OFFSET 0x2D8 + +/* Accelerators */ +#define ADF_GEN6_ACCELERATORS_MASK 0x1 +#define ADF_GEN6_MAX_ACCELERATORS 1 + +/* MSI-X interrupt */ +#define ADF_GEN6_SMIAPF_RP_X0_MASK_OFFSET 0x41A040 +#define ADF_GEN6_SMIAPF_RP_X1_MASK_OFFSET 0x41A044 +#define ADF_GEN6_SMIAPF_MASK_OFFSET 0x41A084 +#define ADF_GEN6_MSIX_RTTABLE_OFFSET(i) (0x409000 + ((i) * 4)) + +/* Bank and ring configuration */ +#define ADF_GEN6_NUM_RINGS_PER_BANK 2 +#define ADF_GEN6_NUM_BANKS_PER_VF 4 +#define ADF_GEN6_ETR_MAX_BANKS 64 +#define ADF_GEN6_RX_RINGS_OFFSET 1 +#define ADF_GEN6_TX_RINGS_MASK 0x1 + +/* Arbiter configuration */ +#define ADF_GEN6_ARB_CONFIG (BIT(31) | BIT(6) | BIT(0)) +#define ADF_GEN6_ARB_OFFSET 0x000 +#define ADF_GEN6_ARB_WRK_2_SER_MAP_OFFSET 0x400 + +/* Admin interface configuration */ +#define ADF_GEN6_ADMINMSGUR_OFFSET 0x500574 +#define ADF_GEN6_ADMINMSGLR_OFFSET 0x500578 +#define ADF_GEN6_MAILBOX_BASE_OFFSET 0x600970 + +/* + * Watchdog timers + * Timeout is in cycles. Clock speed may vary across products but this + * value should be a few milli-seconds. 
+ */ +#define ADF_SSM_WDT_DEFAULT_VALUE 0x7000000ULL +#define ADF_SSM_WDT_PKE_DEFAULT_VALUE 0x8000000ULL +#define ADF_SSMWDTATHL_OFFSET 0x5208 +#define ADF_SSMWDTATHH_OFFSET 0x520C +#define ADF_SSMWDTCNVL_OFFSET 0x5408 +#define ADF_SSMWDTCNVH_OFFSET 0x540C +#define ADF_SSMWDTUCSL_OFFSET 0x5808 +#define ADF_SSMWDTUCSH_OFFSET 0x580C +#define ADF_SSMWDTDCPRL_OFFSET 0x5A08 +#define ADF_SSMWDTDCPRH_OFFSET 0x5A0C +#define ADF_SSMWDTPKEL_OFFSET 0x5E08 +#define ADF_SSMWDTPKEH_OFFSET 0x5E0C + +/* Ring reset */ +#define ADF_RPRESET_POLL_TIMEOUT_US (5 * USEC_PER_SEC) +#define ADF_RPRESET_POLL_DELAY_US 20 +#define ADF_WQM_CSR_RPRESETCTL_RESET BIT(0) +#define ADF_WQM_CSR_RPRESETCTL(bank) (0x6000 + (bank) * 8) +#define ADF_WQM_CSR_RPRESETSTS_STATUS BIT(0) +#define ADF_WQM_CSR_RPRESETSTS(bank) (ADF_WQM_CSR_RPRESETCTL(bank) + 4) + +/* Controls and sets up the corresponding ring mode of operation */ +#define ADF_GEN6_CSR_RINGMODECTL(bank) (0x9000 + (bank) * 4) + +/* Specifies the traffic class to use for the transactions to/from the ring */ +#define ADF_GEN6_RINGMODECTL_TC_MASK GENMASK(18, 16) +#define ADF_GEN6_RINGMODECTL_TC_DEFAULT 0x7 + +/* Specifies usage of tc for the transactions to/from this ring */ +#define ADF_GEN6_RINGMODECTL_TC_EN_MASK GENMASK(20, 19) + +/* + * Use the value programmed in the tc field for request descriptor + * and metadata read transactions + */ +#define ADF_GEN6_RINGMODECTL_TC_EN_OP1 0x1 + +/* VC0 Resource Control Register */ +#define ADF_GEN6_PVC0CTL_OFFSET 0x204 +#define ADF_GEN6_PVC0CTL_TCVCMAP_OFFSET 1 +#define ADF_GEN6_PVC0CTL_TCVCMAP_MASK GENMASK(7, 1) +#define ADF_GEN6_PVC0CTL_TCVCMAP_DEFAULT 0x7F + +/* VC1 Resource Control Register */ +#define ADF_GEN6_PVC1CTL_OFFSET 0x210 +#define ADF_GEN6_PVC1CTL_TCVCMAP_OFFSET 1 +#define ADF_GEN6_PVC1CTL_TCVCMAP_MASK GENMASK(7, 1) +#define ADF_GEN6_PVC1CTL_TCVCMAP_DEFAULT 0x40 +#define ADF_GEN6_PVC1CTL_VCEN_OFFSET 31 +#define ADF_GEN6_PVC1CTL_VCEN_MASK BIT(31) +/* RW bit: 0x1 - enables a Virtual Channel, 0x0 - disables */ +#define ADF_GEN6_PVC1CTL_VCEN_ON 0x1 + +/* Error source mask registers */ +#define ADF_GEN6_ERRMSK0 0x41A210 +#define ADF_GEN6_ERRMSK1 0x41A214 +#define ADF_GEN6_ERRMSK2 0x41A218 +#define ADF_GEN6_ERRMSK3 0x41A21C + +#define ADF_GEN6_VFLNOTIFY BIT(7) + +/* Number of heartbeat counter pairs */ +#define ADF_NUM_HB_CNT_PER_AE ADF_NUM_THREADS_PER_AE + +/* Physical function fuses */ +#define ADF_6XXX_ACCELENGINES_MASK GENMASK(8, 0) +#define ADF_6XXX_ADMIN_AE_MASK GENMASK(8, 8) + +/* Firmware binaries */ +#define ADF_6XXX_FW "qat_6xxx.bin" +#define ADF_6XXX_MMP "qat_6xxx_mmp.bin" +#define ADF_6XXX_CY_OBJ "qat_6xxx_cy.bin" +#define ADF_6XXX_DC_OBJ "qat_6xxx_dc.bin" +#define ADF_6XXX_ADMIN_OBJ "qat_6xxx_admin.bin" + +enum icp_qat_gen6_slice_mask { + ICP_ACCEL_GEN6_MASK_UCS_SLICE = BIT(0), + ICP_ACCEL_GEN6_MASK_AUTH_SLICE = BIT(1), + ICP_ACCEL_GEN6_MASK_PKE_SLICE = BIT(2), + ICP_ACCEL_GEN6_MASK_CPR_SLICE = BIT(3), + ICP_ACCEL_GEN6_MASK_DCPRZ_SLICE = BIT(4), + ICP_ACCEL_GEN6_MASK_WCP_WAT_SLICE = BIT(6), +}; + +void adf_init_hw_data_6xxx(struct adf_hw_device_data *hw_data); +void adf_clean_hw_data_6xxx(struct adf_hw_device_data *hw_data); + +#endif /* ADF_6XXX_HW_DATA_H_ */ diff --git a/drivers/crypto/intel/qat/qat_6xxx/adf_drv.c b/drivers/crypto/intel/qat/qat_6xxx/adf_drv.c new file mode 100644 index 000000000000..2531c337e0dd --- /dev/null +++ b/drivers/crypto/intel/qat/qat_6xxx/adf_drv.c @@ -0,0 +1,224 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright(c) 2025 Intel Corporation */ +#include +#include +#include 
+#include
+#include
+#include
+#include
+#include
+
+#include <adf_accel_devices.h>
+#include <adf_cfg.h>
+#include <adf_common_drv.h>
+#include <adf_dbgfs.h>
+
+#include "adf_gen6_shared.h"
+#include "adf_6xxx_hw_data.h"
+
+static int bar_map[] = {
+	0, /* SRAM */
+	2, /* PMISC */
+	4, /* ETR */
+};
+
+static void adf_device_down(void *accel_dev)
+{
+	adf_dev_down(accel_dev);
+}
+
+static void adf_dbgfs_cleanup(void *accel_dev)
+{
+	adf_dbgfs_exit(accel_dev);
+}
+
+static void adf_cfg_device_remove(void *accel_dev)
+{
+	adf_cfg_dev_remove(accel_dev);
+}
+
+static void adf_cleanup_hw_data(void *accel_dev)
+{
+	struct adf_accel_dev *accel_device = accel_dev;
+
+	if (accel_device->hw_device) {
+		adf_clean_hw_data_6xxx(accel_device->hw_device);
+		accel_device->hw_device = NULL;
+	}
+}
+
+static void adf_devmgr_remove(void *accel_dev)
+{
+	adf_devmgr_rm_dev(accel_dev, NULL);
+}
+
+static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+{
+	struct adf_accel_pci *accel_pci_dev;
+	struct adf_hw_device_data *hw_data;
+	struct device *dev = &pdev->dev;
+	struct adf_accel_dev *accel_dev;
+	struct adf_bar *bar;
+	unsigned int i;
+	int ret;
+
+	if (num_possible_nodes() > 1 && dev_to_node(dev) < 0) {
+		/*
+		 * If the accelerator is connected to a node with no memory
+		 * there is no point in using the accelerator since the remote
+		 * memory transaction will be very slow.
+		 */
+		return dev_err_probe(dev, -EINVAL, "Invalid NUMA configuration.\n");
+	}
+
+	accel_dev = devm_kzalloc(dev, sizeof(*accel_dev), GFP_KERNEL);
+	if (!accel_dev)
+		return -ENOMEM;
+
+	INIT_LIST_HEAD(&accel_dev->crypto_list);
+	INIT_LIST_HEAD(&accel_dev->list);
+	accel_pci_dev = &accel_dev->accel_pci_dev;
+	accel_pci_dev->pci_dev = pdev;
+	accel_dev->owner = THIS_MODULE;
+
+	hw_data = devm_kzalloc(dev, sizeof(*hw_data), GFP_KERNEL);
+	if (!hw_data)
+		return -ENOMEM;
+
+	pci_read_config_byte(pdev, PCI_REVISION_ID, &accel_pci_dev->revid);
+	pci_read_config_dword(pdev, ADF_GEN6_FUSECTL4_OFFSET, &hw_data->fuses[ADF_FUSECTL4]);
+	pci_read_config_dword(pdev, ADF_GEN6_FUSECTL0_OFFSET, &hw_data->fuses[ADF_FUSECTL0]);
+	pci_read_config_dword(pdev, ADF_GEN6_FUSECTL1_OFFSET, &hw_data->fuses[ADF_FUSECTL1]);
+
+	if (!(hw_data->fuses[ADF_FUSECTL1] & ICP_ACCEL_GEN6_MASK_WCP_WAT_SLICE))
+		return dev_err_probe(dev, -EFAULT, "Wireless mode is not supported.\n");
+
+	/* Enable PCI device */
+	ret = pcim_enable_device(pdev);
+	if (ret)
+		return dev_err_probe(dev, ret, "Cannot enable PCI device.\n");
+
+	ret = adf_devmgr_add_dev(accel_dev, NULL);
+	if (ret)
+		return dev_err_probe(dev, ret, "Failed to add new accelerator device.\n");
+
+	ret = devm_add_action_or_reset(dev, adf_devmgr_remove, accel_dev);
+	if (ret)
+		return ret;
+
+	accel_dev->hw_device = hw_data;
+	adf_init_hw_data_6xxx(accel_dev->hw_device);
+
+	ret = devm_add_action_or_reset(dev, adf_cleanup_hw_data, accel_dev);
+	if (ret)
+		return ret;
+
+	/* Get Accelerators and Accelerator Engine masks */
+	hw_data->accel_mask = hw_data->get_accel_mask(hw_data);
+	hw_data->ae_mask = hw_data->get_ae_mask(hw_data);
+	accel_pci_dev->sku = hw_data->get_sku(hw_data);
+
+	/* If the device has no acceleration engines then ignore it */
+	if (!hw_data->accel_mask || !hw_data->ae_mask ||
+	    (~hw_data->ae_mask & ADF_GEN6_ACCELERATORS_MASK)) {
+		ret = -EFAULT;
+		return dev_err_probe(dev, ret, "No acceleration units were found.\n");
+	}
+
+	/* Create device configuration table */
+	ret = adf_cfg_dev_add(accel_dev);
+	if (ret)
+		return ret;
+
+	ret = devm_add_action_or_reset(dev, adf_cfg_device_remove, accel_dev);
+	if (ret)
+		return ret;
+
+	/* Set DMA identifier */
+	ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));
+	if (ret)
+		return dev_err_probe(dev, ret, "No usable DMA configuration.\n");
+
+	ret = adf_gen6_cfg_dev_init(accel_dev);
+	if (ret)
+		return dev_err_probe(dev, ret, "Failed to initialize configuration.\n");
+
+	/* Get accelerator capability mask */
+	hw_data->accel_capabilities_mask = hw_data->get_accel_cap(accel_dev);
+	if (!hw_data->accel_capabilities_mask) {
+		ret = -EINVAL;
+		return dev_err_probe(dev, ret, "Failed to get capabilities mask.\n");
+	}
+
+	for (i = 0; i < ARRAY_SIZE(bar_map); i++) {
+		bar = &accel_pci_dev->pci_bars[i];
+
+		/* Map 64-bit PCIe BAR */
+		bar->virt_addr = pcim_iomap_region(pdev, bar_map[i], pci_name(pdev));
+		if (IS_ERR(bar->virt_addr)) {
+			ret = PTR_ERR(bar->virt_addr);
+			return dev_err_probe(dev, ret, "Failed to ioremap PCI region.\n");
+		}
+	}
+
+	pci_set_master(pdev);
+
+	/*
+	 * The PCI config space is saved at this point and will be restored
+	 * after a Function Level Reset (FLR) as the FLR does not completely
+	 * restore it.
+	 */
+	ret = pci_save_state(pdev);
+	if (ret)
+		return dev_err_probe(dev, ret, "Failed to save PCI state.\n");
+
+	adf_dbgfs_init(accel_dev);
+
+	ret = devm_add_action_or_reset(dev, adf_dbgfs_cleanup, accel_dev);
+	if (ret)
+		return ret;
+
+	ret = adf_dev_up(accel_dev, true);
+	if (ret)
+		return ret;
+
+	ret = devm_add_action_or_reset(dev, adf_device_down, accel_dev);
+	if (ret)
+		return ret;
+
+	ret = adf_sysfs_init(accel_dev);
+
+	return ret;
+}
+
+static void adf_shutdown(struct pci_dev *pdev)
+{
+	struct adf_accel_dev *accel_dev = adf_devmgr_pci_to_accel_dev(pdev);
+
+	adf_dev_down(accel_dev);
+}
+
+static const struct pci_device_id adf_pci_tbl[] = {
+	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_QAT_6XXX) },
+	{ }
+};
+MODULE_DEVICE_TABLE(pci, adf_pci_tbl);
+
+static struct pci_driver adf_driver = {
+	.id_table = adf_pci_tbl,
+	.name = ADF_6XXX_DEVICE_NAME,
+	.probe = adf_probe,
+	.shutdown = adf_shutdown,
+	.sriov_configure = adf_sriov_configure,
+	.err_handler = &adf_err_handler,
+};
+module_pci_driver(adf_driver);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Intel");
+MODULE_FIRMWARE(ADF_6XXX_FW);
+MODULE_FIRMWARE(ADF_6XXX_MMP);
+MODULE_DESCRIPTION("Intel(R) QuickAssist Technology for GEN6 Devices");
+MODULE_SOFTDEP("pre: crypto-intel_qat");
+MODULE_IMPORT_NS("CRYPTO_QAT");
diff --git a/drivers/crypto/intel/qat/qat_common/Makefile b/drivers/crypto/intel/qat/qat_common/Makefile
index 0a9da7398a78..958f9a8ac6b3 100644
--- a/drivers/crypto/intel/qat/qat_common/Makefile
+++ b/drivers/crypto/intel/qat/qat_common/Makefile
@@ -19,6 +19,7 @@ intel_qat-y := adf_accel_engine.o \
 	adf_gen4_pm.o \
 	adf_gen4_ras.o \
 	adf_gen4_vf_mig.o \
+	adf_gen6_shared.o \
 	adf_hw_arbiter.o \
 	adf_init.o \
 	adf_isr.o \
diff --git a/drivers/crypto/intel/qat/qat_common/adf_accel_devices.h b/drivers/crypto/intel/qat/qat_common/adf_accel_devices.h
index ed8b85360573..2ee526063213 100644
--- a/drivers/crypto/intel/qat/qat_common/adf_accel_devices.h
+++ b/drivers/crypto/intel/qat/qat_common/adf_accel_devices.h
@@ -26,6 +26,7 @@
 #define ADF_C3XXXVF_DEVICE_NAME "c3xxxvf"
 #define ADF_4XXX_DEVICE_NAME "4xxx"
 #define ADF_420XX_DEVICE_NAME "420xx"
+#define ADF_6XXX_DEVICE_NAME "6xxx"
 #define PCI_DEVICE_ID_INTEL_QAT_4XXX 0x4940
 #define PCI_DEVICE_ID_INTEL_QAT_4XXXIOV 0x4941
 #define PCI_DEVICE_ID_INTEL_QAT_401XX 0x4942
@@ -35,6 +36,7 @@
 #define PCI_DEVICE_ID_INTEL_QAT_420XX 0x4946
 #define PCI_DEVICE_ID_INTEL_QAT_420XXIOV 0x4947
 #define PCI_DEVICE_ID_INTEL_QAT_6XXX 0x4948
+#define PCI_DEVICE_ID_INTEL_QAT_6XXX_IOV 0x4949
 #define ADF_DEVICE_FUSECTL_OFFSET 0x40
 #define ADF_DEVICE_LEGFUSE_OFFSET 0x4C
diff --git a/drivers/crypto/intel/qat/qat_common/adf_cfg_common.h b/drivers/crypto/intel/qat/qat_common/adf_cfg_common.h
index 89df3888d7ea..15fdf9854b81 100644
--- a/drivers/crypto/intel/qat/qat_common/adf_cfg_common.h
+++ b/drivers/crypto/intel/qat/qat_common/adf_cfg_common.h
@@ -48,6 +48,7 @@ enum adf_device_type {
 	DEV_C3XXXVF,
 	DEV_4XXX,
 	DEV_420XX,
+	DEV_6XXX,
 };
 
 struct adf_dev_status_info {
diff --git a/drivers/crypto/intel/qat/qat_common/adf_fw_config.h b/drivers/crypto/intel/qat/qat_common/adf_fw_config.h
index 4f86696800c9..78957fa900b7 100644
--- a/drivers/crypto/intel/qat/qat_common/adf_fw_config.h
+++ b/drivers/crypto/intel/qat/qat_common/adf_fw_config.h
@@ -8,6 +8,7 @@ enum adf_fw_objs {
 	ADF_FW_ASYM_OBJ,
 	ADF_FW_DC_OBJ,
 	ADF_FW_ADMIN_OBJ,
+	ADF_FW_CY_OBJ,
 };
 
 struct adf_fw_config {
diff --git a/drivers/crypto/intel/qat/qat_common/adf_gen6_pm.h b/drivers/crypto/intel/qat/qat_common/adf_gen6_pm.h
new file mode 100644
index 000000000000..9a5b995f7ada
--- /dev/null
+++ b/drivers/crypto/intel/qat/qat_common/adf_gen6_pm.h
@@ -0,0 +1,28 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright(c) 2025 Intel Corporation */
+#ifndef ADF_GEN6_PM_H
+#define ADF_GEN6_PM_H
+
+#include <linux/bits.h>
+#include <linux/time.h>
+
+struct adf_accel_dev;
+
+/* Power management */
+#define ADF_GEN6_PM_POLL_DELAY_US 20
+#define ADF_GEN6_PM_POLL_TIMEOUT_US USEC_PER_SEC
+#define ADF_GEN6_PM_STATUS 0x50A00C
+#define ADF_GEN6_PM_INTERRUPT 0x50A028
+
+/* Power management source in ERRSOU2 and ERRMSK2 */
+#define ADF_GEN6_PM_SOU BIT(18)
+
+/* cpm_pm_interrupt bitfields */
+#define ADF_GEN6_PM_DRV_ACTIVE BIT(20)
+
+#define ADF_GEN6_PM_DEFAULT_IDLE_FILTER 0x6
+
+/* cpm_pm_status bitfields */
+#define ADF_GEN6_PM_INIT_STATE BIT(21)
+
+#endif /* ADF_GEN6_PM_H */
diff --git a/drivers/crypto/intel/qat/qat_common/adf_gen6_shared.c b/drivers/crypto/intel/qat/qat_common/adf_gen6_shared.c
new file mode 100644
index 000000000000..58a072e2f936
--- /dev/null
+++ b/drivers/crypto/intel/qat/qat_common/adf_gen6_shared.c
@@ -0,0 +1,49 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2025 Intel Corporation */
+#include <linux/export.h>
+
+#include "adf_gen4_config.h"
+#include "adf_gen4_hw_csr_data.h"
+#include "adf_gen4_pfvf.h"
+#include "adf_gen6_shared.h"
+
+struct adf_accel_dev;
+struct adf_pfvf_ops;
+struct adf_hw_csr_ops;
+
+/*
+ * QAT GEN4 and GEN6 devices often differ in terms of supported features,
+ * options and internal logic. However, some of the mechanisms and register
+ * layouts are shared between these two generations. This file serves as an
+ * abstraction layer that allows the existing GEN4 implementation to be
+ * reused for GEN6 without additional overhead and complexity.
+ */
+void adf_gen6_init_pf_pfvf_ops(struct adf_pfvf_ops *pfvf_ops)
+{
+	adf_gen4_init_pf_pfvf_ops(pfvf_ops);
+}
+EXPORT_SYMBOL_GPL(adf_gen6_init_pf_pfvf_ops);
+
+void adf_gen6_init_hw_csr_ops(struct adf_hw_csr_ops *csr_ops)
+{
+	adf_gen4_init_hw_csr_ops(csr_ops);
+}
+EXPORT_SYMBOL_GPL(adf_gen6_init_hw_csr_ops);
+
+int adf_gen6_cfg_dev_init(struct adf_accel_dev *accel_dev)
+{
+	return adf_gen4_cfg_dev_init(accel_dev);
+}
+EXPORT_SYMBOL_GPL(adf_gen6_cfg_dev_init);
+
+int adf_gen6_comp_dev_config(struct adf_accel_dev *accel_dev)
+{
+	return adf_comp_dev_config(accel_dev);
+}
+EXPORT_SYMBOL_GPL(adf_gen6_comp_dev_config);
+
+int adf_gen6_no_dev_config(struct adf_accel_dev *accel_dev)
+{
+	return adf_no_dev_config(accel_dev);
+}
+EXPORT_SYMBOL_GPL(adf_gen6_no_dev_config);
diff --git a/drivers/crypto/intel/qat/qat_common/adf_gen6_shared.h b/drivers/crypto/intel/qat/qat_common/adf_gen6_shared.h
new file mode 100644
index 000000000000..bc8e71e984fc
--- /dev/null
+++ b/drivers/crypto/intel/qat/qat_common/adf_gen6_shared.h
@@ -0,0 +1,15 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright(c) 2025 Intel Corporation */
+#ifndef ADF_GEN6_SHARED_H_
+#define ADF_GEN6_SHARED_H_
+
+struct adf_hw_csr_ops;
+struct adf_accel_dev;
+struct adf_pfvf_ops;
+
+void adf_gen6_init_pf_pfvf_ops(struct adf_pfvf_ops *pfvf_ops);
+void adf_gen6_init_hw_csr_ops(struct adf_hw_csr_ops *csr_ops);
+int adf_gen6_cfg_dev_init(struct adf_accel_dev *accel_dev);
+int adf_gen6_comp_dev_config(struct adf_accel_dev *accel_dev);
+int adf_gen6_no_dev_config(struct adf_accel_dev *accel_dev);
+#endif /* ADF_GEN6_SHARED_H_ */
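
Editor's note (not part of the patch): the RINGMODECTL definitions in adf_6xxx_hw_data.h describe a traffic-class field and an enable field within a per-bank CSR. The sketch below illustrates how such fields are typically programmed with FIELD_PREP() from <linux/bitfield.h> together with the driver's ADF_CSR_RD()/ADF_CSR_WR() helpers; the function name is illustrative and the hunk that actually uses these masks is not shown in this part of the series.

static void set_bank_traffic_class(void __iomem *csr, u32 bank)
{
	u32 val;

	/* Read the current ring mode control value for this bank */
	val = ADF_CSR_RD(csr, ADF_GEN6_CSR_RINGMODECTL(bank));

	/* Program the default traffic class and enable its use for this ring */
	val |= FIELD_PREP(ADF_GEN6_RINGMODECTL_TC_MASK, ADF_GEN6_RINGMODECTL_TC_DEFAULT);
	val |= FIELD_PREP(ADF_GEN6_RINGMODECTL_TC_EN_MASK, ADF_GEN6_RINGMODECTL_TC_EN_OP1);

	ADF_CSR_WR(csr, ADF_GEN6_CSR_RINGMODECTL(bank), val);
}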