From patchwork Thu Feb 25 18:27:10 2021
X-Patchwork-Submitter: Thara Gopinath <thara.gopinath@linaro.org>
X-Patchwork-Id: 387231
From: Thara Gopinath <thara.gopinath@linaro.org>
To: herbert@gondor.apana.org.au, davem@davemloft.net, bjorn.andersson@linaro.org
Cc: ebiggers@google.com, ardb@kernel.org, sivaprak@codeaurora.org,
    linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 1/7] crypto: qce: common: Add MAC failed error checking
Date: Thu, 25 Feb 2021 13:27:10 -0500
Message-Id: <20210225182716.1402449-2-thara.gopinath@linaro.org>
In-Reply-To: <20210225182716.1402449-1-thara.gopinath@linaro.org>
References: <20210225182716.1402449-1-thara.gopinath@linaro.org>
X-Mailing-List: linux-crypto@vger.kernel.org

MAC_FAILED gets set in the status register if authentication fails for CCM
algorithms (during decryption). Add support to catch and flag this error.

Signed-off-by: Thara Gopinath <thara.gopinath@linaro.org>
---
 drivers/crypto/qce/common.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

--
2.25.1

diff --git a/drivers/crypto/qce/common.c b/drivers/crypto/qce/common.c
index dceb9579d87a..7c3cb483749e 100644
--- a/drivers/crypto/qce/common.c
+++ b/drivers/crypto/qce/common.c
@@ -403,7 +403,8 @@ int qce_start(struct crypto_async_request *async_req, u32 type)
 }
 
 #define STATUS_ERRORS	\
-		(BIT(SW_ERR_SHIFT) | BIT(AXI_ERR_SHIFT) | BIT(HSD_ERR_SHIFT))
+		(BIT(SW_ERR_SHIFT) | BIT(AXI_ERR_SHIFT) | \
+		 BIT(HSD_ERR_SHIFT) | BIT(MAC_FAILED_SHIFT))
 
 int qce_check_status(struct qce_device *qce, u32 *status)
 {
@@ -417,8 +418,12 @@ int qce_check_status(struct qce_device *qce, u32 *status)
 	 * use result_status from result dump the result_status needs to be byte
 	 * swapped, since we set the device to little endian.
 	 */
-	if (*status & STATUS_ERRORS || !(*status & BIT(OPERATION_DONE_SHIFT)))
-		ret = -ENXIO;
+	if (*status & STATUS_ERRORS || !(*status & BIT(OPERATION_DONE_SHIFT))) {
+		if (*status & BIT(MAC_FAILED_SHIFT))
+			ret = -EBADMSG;
+		else
+			ret = -ENXIO;
+	}
 
 	return ret;
 }
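[Editor's note: a minimal sketch, not part of the patch, of how a completion
path might act on the two error codes qce_check_status() can now return. The
helper below is hypothetical; only dev_warn()/dev_err() and the qce_device
dev pointer are taken from the driver.]

static void example_handle_qce_status(struct qce_device *qce, int err)
{
	if (err == -EBADMSG)
		/* MAC comparison failed: the message is inauthentic, not a HW fault */
		dev_warn(qce->dev, "authentication tag mismatch\n");
	else if (err)
		/* SW/AXI/HSD errors or a missing OPERATION_DONE still map to -ENXIO */
		dev_err(qce->dev, "crypto engine error (%d)\n", err);
}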
From patchwork Thu Feb 25 18:27:11 2021
X-Patchwork-Submitter: Thara Gopinath <thara.gopinath@linaro.org>
X-Patchwork-Id: 387233
From: Thara Gopinath <thara.gopinath@linaro.org>
To: herbert@gondor.apana.org.au, davem@davemloft.net, bjorn.andersson@linaro.org
Cc: ebiggers@google.com, ardb@kernel.org, sivaprak@codeaurora.org,
    linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 2/7] crypto: qce: common: Make result dump optional
Date: Thu, 25 Feb 2021 13:27:11 -0500
Message-Id: <20210225182716.1402449-3-thara.gopinath@linaro.org>
In-Reply-To: <20210225182716.1402449-1-thara.gopinath@linaro.org>
References: <20210225182716.1402449-1-thara.gopinath@linaro.org>
X-Mailing-List: linux-crypto@vger.kernel.org

The Qualcomm crypto engine allows the IV registers and the status register to
be concatenated to the output. This option is enabled by setting the
RESULTS_DUMP field in the GOPROC register. This is useful for most algorithms,
either to retrieve the status of the operation or, for authentication
algorithms, to retrieve the MAC. But for CCM algorithms the MAC is part of the
output stream and is not retrieved from the IV registers, so a separate buffer
is needed to retrieve it. Make enabling the RESULTS_DUMP field optional so
that each algorithm can choose whether or not to enable it. Note that in this
patch the enabled algorithms always request RESULTS_DUMP; this changes later
with the introduction of the CCM algorithms.

Signed-off-by: Thara Gopinath <thara.gopinath@linaro.org>
---
 drivers/crypto/qce/common.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

--
2.25.1

Reviewed-by: Bjorn Andersson <bjorn.andersson@linaro.org>

diff --git a/drivers/crypto/qce/common.c b/drivers/crypto/qce/common.c
index 7c3cb483749e..2485aa371d83 100644
--- a/drivers/crypto/qce/common.c
+++ b/drivers/crypto/qce/common.c
@@ -88,9 +88,12 @@ static void qce_setup_config(struct qce_device *qce)
 	qce_write(qce, REG_CONFIG, config);
 }
 
-static inline void qce_crypto_go(struct qce_device *qce)
+static inline void qce_crypto_go(struct qce_device *qce, bool result_dump)
 {
-	qce_write(qce, REG_GOPROC, BIT(GO_SHIFT) | BIT(RESULTS_DUMP_SHIFT));
+	if (result_dump)
+		qce_write(qce, REG_GOPROC, BIT(GO_SHIFT) | BIT(RESULTS_DUMP_SHIFT));
+	else
+		qce_write(qce, REG_GOPROC, BIT(GO_SHIFT));
 }
 
 #ifdef CONFIG_CRYPTO_DEV_QCE_SHA
@@ -219,7 +222,7 @@ static int qce_setup_regs_ahash(struct crypto_async_request *async_req)
 	config = qce_config_reg(qce, 1);
 	qce_write(qce, REG_CONFIG, config);
 
-	qce_crypto_go(qce);
+	qce_crypto_go(qce, true);
 
 	return 0;
 }
@@ -380,7 +383,7 @@ static int qce_setup_regs_skcipher(struct crypto_async_request *async_req)
 	config = qce_config_reg(qce, 1);
 	qce_write(qce, REG_CONFIG, config);
 
-	qce_crypto_go(qce);
+	qce_crypto_go(qce, true);
 
 	return 0;
 }
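[Editor's note: a small hypothetical sketch of how a setup path could use the
new result_dump parameter once ccm(aes) lands; it is one plausible call site
consistent with the commit message, not code from this series.]

static void example_start_operation(struct qce_device *qce, unsigned long flags)
{
	/*
	 * ccm(aes) reads the MAC from the output stream, so it can run
	 * without the result dump; all other algorithms keep requesting it.
	 */
	qce_crypto_go(qce, !IS_CCM(flags));
}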
From patchwork Thu Feb 25 18:27:12 2021
X-Patchwork-Submitter: Thara Gopinath <thara.gopinath@linaro.org>
X-Patchwork-Id: 387232
From: Thara Gopinath <thara.gopinath@linaro.org>
To: herbert@gondor.apana.org.au, davem@davemloft.net, bjorn.andersson@linaro.org
Cc: ebiggers@google.com, ardb@kernel.org, sivaprak@codeaurora.org,
    linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 3/7] crypto: qce: Add mode for rfc4309
Date: Thu, 25 Feb 2021 13:27:12 -0500
Message-Id: <20210225182716.1402449-4-thara.gopinath@linaro.org>
In-Reply-To: <20210225182716.1402449-1-thara.gopinath@linaro.org>
References: <20210225182716.1402449-1-thara.gopinath@linaro.org>
X-Mailing-List: linux-crypto@vger.kernel.org

RFC 4309 is the specification that uses AES CCM algorithms with IPsec
security packets. Add a submode to identify the rfc4309 ccm(aes) algorithm
in the crypto driver.

Signed-off-by: Thara Gopinath <thara.gopinath@linaro.org>
---
 drivers/crypto/qce/common.h | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

--
2.25.1

diff --git a/drivers/crypto/qce/common.h b/drivers/crypto/qce/common.h
index 3bc244bcca2d..3ffe719b79e4 100644
--- a/drivers/crypto/qce/common.h
+++ b/drivers/crypto/qce/common.h
@@ -51,9 +51,11 @@
 #define QCE_MODE_CCM			BIT(12)
 #define QCE_MODE_MASK			GENMASK(12, 8)
 
+#define QCE_MODE_CCM_RFC4309		BIT(13)
+
 /* cipher encryption/decryption operations */
-#define QCE_ENCRYPT			BIT(13)
-#define QCE_DECRYPT			BIT(14)
+#define QCE_ENCRYPT			BIT(14)
+#define QCE_DECRYPT			BIT(15)
 
 #define IS_DES(flags)			(flags & QCE_ALG_DES)
 #define IS_3DES(flags)			(flags & QCE_ALG_3DES)
@@ -73,6 +75,7 @@
 #define IS_CTR(mode)			(mode & QCE_MODE_CTR)
 #define IS_XTS(mode)			(mode & QCE_MODE_XTS)
 #define IS_CCM(mode)			(mode & QCE_MODE_CCM)
+#define IS_CCM_RFC4309(mode)		((mode) & QCE_MODE_CCM_RFC4309)
 
 #define IS_ENCRYPT(dir)			(dir & QCE_ENCRYPT)
 #define IS_DECRYPT(dir)			(dir & QCE_DECRYPT)
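[Editor's note: a hypothetical snippet showing how the new submode bit
composes with the existing flags; the function is illustrative only, the
macros and bit positions are those of the header after this patch.]

static bool example_is_rfc4309_request(void)
{
	/* an rfc4309(ccm(aes)) encryption request sets both CCM bits */
	unsigned long flags = QCE_ALG_AES | QCE_MODE_CCM |
			      QCE_MODE_CCM_RFC4309 | QCE_ENCRYPT;

	/* IS_CCM() and IS_CCM_RFC4309() are both true; plain ccm(aes) leaves BIT(13) clear */
	return IS_CCM(flags) && IS_CCM_RFC4309(flags);
}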
From patchwork Thu Feb 25 18:27:13 2021
X-Patchwork-Submitter: Thara Gopinath <thara.gopinath@linaro.org>
X-Patchwork-Id: 387235
From: Thara Gopinath <thara.gopinath@linaro.org>
To: herbert@gondor.apana.org.au, davem@davemloft.net, bjorn.andersson@linaro.org
Cc: ebiggers@google.com, ardb@kernel.org, sivaprak@codeaurora.org,
    linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 4/7] crypto: qce: Add support for AEAD algorithms
Date: Thu, 25 Feb 2021 13:27:13 -0500
Message-Id: <20210225182716.1402449-5-thara.gopinath@linaro.org>
In-Reply-To: <20210225182716.1402449-1-thara.gopinath@linaro.org>
References: <20210225182716.1402449-1-thara.gopinath@linaro.org>
X-Mailing-List: linux-crypto@vger.kernel.org

Introduce support to enable the following algorithms in the Qualcomm Crypto
Engine:

- authenc(hmac(sha1),cbc(des))
- authenc(hmac(sha1),cbc(des3_ede))
- authenc(hmac(sha256),cbc(des))
- authenc(hmac(sha256),cbc(des3_ede))
- authenc(hmac(sha256),cbc(aes))
- ccm(aes)
- rfc4309(ccm(aes))

Signed-off-by: Thara Gopinath <thara.gopinath@linaro.org>
---
 drivers/crypto/Kconfig      |  15 +
 drivers/crypto/qce/Makefile |   1 +
 drivers/crypto/qce/aead.c   | 779 ++++++++++++++++++++++++++++++++++++
 drivers/crypto/qce/aead.h   |  53 +++
 drivers/crypto/qce/common.h |   2 +
 drivers/crypto/qce/core.c   |   4 +
 6 files changed, 854 insertions(+)
 create mode 100644 drivers/crypto/qce/aead.c
 create mode 100644 drivers/crypto/qce/aead.h

--
2.25.1

diff --git a/drivers/crypto/Kconfig b/drivers/crypto/Kconfig
index e535f28a8028..8caf296acda4 100644
--- a/drivers/crypto/Kconfig
+++ b/drivers/crypto/Kconfig
@@ -645,6 +645,12 @@ config CRYPTO_DEV_QCE_SHA
 	select CRYPTO_SHA1
 	select CRYPTO_SHA256
 
+config CRYPTO_DEV_QCE_AEAD
+	bool
+	depends on CRYPTO_DEV_QCE
+	select CRYPTO_AUTHENC
+	select CRYPTO_LIB_DES
+
 choice
 	prompt "Algorithms enabled for QCE acceleration"
 	default CRYPTO_DEV_QCE_ENABLE_ALL
@@ -665,6 +671,7 @@ choice
 		bool "All supported algorithms"
 		select CRYPTO_DEV_QCE_SKCIPHER
 		select CRYPTO_DEV_QCE_SHA
+		select CRYPTO_DEV_QCE_AEAD
 		help
 		  Enable all supported algorithms:
 		  - AES (CBC, CTR, ECB, XTS)
@@ -690,6 +697,14 @@ choice
 		  - SHA1, HMAC-SHA1
 		  - SHA256, HMAC-SHA256
 
+	config CRYPTO_DEV_QCE_ENABLE_AEAD
+		bool "AEAD algorithms only"
+		select CRYPTO_DEV_QCE_AEAD
+		help
+		  Enable AEAD algorithms only:
+		  - authenc()
+		  - ccm(aes)
+		  - rfc4309(ccm(aes))
 endchoice
 
 config CRYPTO_DEV_QCE_SW_MAX_LEN
diff --git a/drivers/crypto/qce/Makefile b/drivers/crypto/qce/Makefile
index 14ade8a7d664..2cf8984e1b85 100644
--- a/drivers/crypto/qce/Makefile
+++ b/drivers/crypto/qce/Makefile
@@ -6,3 +6,4 @@ qcrypto-objs := core.o \
 
 qcrypto-$(CONFIG_CRYPTO_DEV_QCE_SHA) += sha.o
 qcrypto-$(CONFIG_CRYPTO_DEV_QCE_SKCIPHER) += skcipher.o
+qcrypto-$(CONFIG_CRYPTO_DEV_QCE_AEAD) += aead.o
diff --git a/drivers/crypto/qce/aead.c b/drivers/crypto/qce/aead.c
new file mode 100644
index 000000000000..b594c4bb2640
--- /dev/null
+++ b/drivers/crypto/qce/aead.c
@@ -0,0 +1,779 @@
+// SPDX-License-Identifier: GPL-2.0-only
+
+/*
+ * Copyright (C) 2021, Linaro Limited. All rights reserved.
+ */ +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include "aead.h" + +#define CCM_NONCE_ADATA_SHIFT 6 +#define CCM_NONCE_AUTHSIZE_SHIFT 3 +#define MAX_CCM_ADATA_HEADER_LEN 6 + +static LIST_HEAD(aead_algs); + +static void qce_aead_done(void *data) +{ + struct crypto_async_request *async_req = data; + struct aead_request *req = aead_request_cast(async_req); + struct qce_aead_reqctx *rctx = aead_request_ctx(req); + struct qce_aead_ctx *ctx = crypto_tfm_ctx(async_req->tfm); + struct qce_alg_template *tmpl = to_aead_tmpl(crypto_aead_reqtfm(req)); + struct qce_device *qce = tmpl->qce; + struct qce_result_dump *result_buf = qce->dma.result_buf; + enum dma_data_direction dir_src, dir_dst; + bool diff_dst; + int error; + u32 status; + unsigned int totallen; + unsigned char tag[SHA256_DIGEST_SIZE] = {0}; + int ret = 0; + + diff_dst = (req->src != req->dst) ? true : false; + dir_src = diff_dst ? DMA_TO_DEVICE : DMA_BIDIRECTIONAL; + dir_dst = diff_dst ? DMA_FROM_DEVICE : DMA_BIDIRECTIONAL; + + error = qce_dma_terminate_all(&qce->dma); + if (error) + dev_dbg(qce->dev, "aead dma termination error (%d)\n", + error); + if (diff_dst) + dma_unmap_sg(qce->dev, rctx->src_sg, rctx->src_nents, dir_src); + + dma_unmap_sg(qce->dev, rctx->dst_sg, rctx->dst_nents, dir_dst); + + if (IS_CCM(rctx->flags)) { + if (req->assoclen) { + sg_free_table(&rctx->src_tbl); + if (diff_dst) + sg_free_table(&rctx->dst_tbl); + } else { + if (!(IS_DECRYPT(rctx->flags) && !diff_dst)) + sg_free_table(&rctx->dst_tbl); + } + } else { + sg_free_table(&rctx->dst_tbl); + } + + error = qce_check_status(qce, &status); + if (error < 0 && (error != -EBADMSG)) + dev_err(qce->dev, "aead operation error (%x)\n", status); + + if (IS_ENCRYPT(rctx->flags)) { + totallen = req->cryptlen + req->assoclen; + if (IS_CCM(rctx->flags)) + scatterwalk_map_and_copy(rctx->ccmresult_buf, req->dst, + totallen, ctx->authsize, 1); + else + scatterwalk_map_and_copy(result_buf->auth_iv, req->dst, + totallen, ctx->authsize, 1); + + } else if (!IS_CCM(rctx->flags)) { + totallen = req->cryptlen + req->assoclen - ctx->authsize; + scatterwalk_map_and_copy(tag, req->src, totallen, ctx->authsize, 0); + ret = memcmp(result_buf->auth_iv, tag, ctx->authsize); + if (ret) { + pr_err("Bad message error\n"); + error = -EBADMSG; + } + } + + qce->async_req_done(qce, error); +} + +static struct scatterlist * +qce_aead_prepare_result_buf(struct sg_table *tbl, struct aead_request *req) +{ + struct qce_aead_reqctx *rctx = aead_request_ctx(req); + struct qce_alg_template *tmpl = to_aead_tmpl(crypto_aead_reqtfm(req)); + struct qce_device *qce = tmpl->qce; + + sg_init_one(&rctx->result_sg, qce->dma.result_buf, QCE_RESULT_BUF_SZ); + return qce_sgtable_add(tbl, &rctx->result_sg, QCE_RESULT_BUF_SZ); +} + +static struct scatterlist * +qce_aead_prepare_ccm_result_buf(struct sg_table *tbl, struct aead_request *req) +{ + struct qce_aead_reqctx *rctx = aead_request_ctx(req); + + sg_init_one(&rctx->result_sg, rctx->ccmresult_buf, QCE_BAM_BURST_SIZE); + return qce_sgtable_add(tbl, &rctx->result_sg, QCE_BAM_BURST_SIZE); +} + +static struct scatterlist * +qce_aead_prepare_dst_buf(struct aead_request *req) +{ + struct qce_aead_reqctx *rctx = aead_request_ctx(req); + struct qce_alg_template *tmpl = to_aead_tmpl(crypto_aead_reqtfm(req)); + struct qce_device *qce = tmpl->qce; + struct scatterlist *sg, *msg_sg, __sg[2]; + gfp_t gfp; + unsigned int assoclen = req->assoclen; + unsigned int totallen; + int ret; + + totallen = rctx->cryptlen + assoclen; + 
rctx->dst_nents = sg_nents_for_len(req->dst, totallen); + if (rctx->dst_nents < 0) { + dev_err(qce->dev, "Invalid numbers of dst SG.\n"); + return ERR_PTR(-EINVAL); + } + if (IS_CCM(rctx->flags)) + rctx->dst_nents += 2; + else + rctx->dst_nents += 1; + + gfp = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ? + GFP_KERNEL : GFP_ATOMIC; + ret = sg_alloc_table(&rctx->dst_tbl, rctx->dst_nents, gfp); + if (ret) + return ERR_PTR(ret); + + if (IS_CCM(rctx->flags) && assoclen) { + /* Get the dst buffer */ + msg_sg = scatterwalk_ffwd(__sg, req->dst, assoclen); + + sg = qce_sgtable_add(&rctx->dst_tbl, &rctx->adata_sg, + rctx->assoclen); + if (IS_ERR(sg)) { + ret = PTR_ERR(sg); + goto dst_tbl_free; + } + /* dst buffer */ + sg = qce_sgtable_add(&rctx->dst_tbl, msg_sg, rctx->cryptlen); + if (IS_ERR(sg)) { + ret = PTR_ERR(sg); + goto dst_tbl_free; + } + totallen = rctx->cryptlen + rctx->assoclen; + } else { + if (totallen) { + sg = qce_sgtable_add(&rctx->dst_tbl, req->dst, totallen); + if (IS_ERR(sg)) + goto dst_tbl_free; + } + } + if (IS_CCM(rctx->flags)) + sg = qce_aead_prepare_ccm_result_buf(&rctx->dst_tbl, req); + else + sg = qce_aead_prepare_result_buf(&rctx->dst_tbl, req); + + if (IS_ERR(sg)) + goto dst_tbl_free; + + sg_mark_end(sg); + rctx->dst_sg = rctx->dst_tbl.sgl; + rctx->dst_nents = sg_nents_for_len(rctx->dst_sg, totallen) + 1; + + return sg; + +dst_tbl_free: + sg_free_table(&rctx->dst_tbl); + return sg; +} + +static int +qce_aead_ccm_prepare_buf_assoclen(struct aead_request *req) +{ + struct scatterlist *sg, *msg_sg, __sg[2]; + struct crypto_aead *tfm = crypto_aead_reqtfm(req); + struct qce_aead_reqctx *rctx = aead_request_ctx(req); + struct qce_aead_ctx *ctx = crypto_aead_ctx(tfm); + unsigned int assoclen = rctx->assoclen; + unsigned int adata_header_len, cryptlen, totallen; + gfp_t gfp; + bool diff_dst; + int ret; + + if (IS_DECRYPT(rctx->flags)) + cryptlen = rctx->cryptlen + ctx->authsize; + else + cryptlen = rctx->cryptlen; + totallen = cryptlen + req->assoclen; + + /* Get the msg */ + msg_sg = scatterwalk_ffwd(__sg, req->src, req->assoclen); + + rctx->adata = kzalloc((ALIGN(assoclen, 16) + MAX_CCM_ADATA_HEADER_LEN) * + sizeof(unsigned char), GFP_ATOMIC); + if (!rctx->adata) + return -ENOMEM; + + /* + * Format associated data (RFC3610 and NIST 800-38C) + * Even though specification allows for AAD to be up to 2^64 - 1 bytes, + * the assoclen field in aead_request is unsigned int and thus limits + * the AAD to be up to 2^32 - 1 bytes. So we handle only two scenarios + * while forming the header for AAD. + */ + if (assoclen < 0xff00) { + adata_header_len = 2; + *(__be16 *)rctx->adata = cpu_to_be16(assoclen); + } else { + adata_header_len = 6; + *(__be16 *)rctx->adata = cpu_to_be16(0xfffe); + *(__be32 *)(rctx->adata + 2) = cpu_to_be32(assoclen); + } + + /* Copy the associated data */ + if (sg_copy_to_buffer(req->src, sg_nents_for_len(req->src, assoclen), + rctx->adata + adata_header_len, + assoclen) != assoclen) + return -EINVAL; + + /* Pad associated data to block size */ + rctx->assoclen = ALIGN(assoclen + adata_header_len, 16); + + diff_dst = (req->src != req->dst) ? true : false; + + if (diff_dst) + rctx->src_nents = sg_nents_for_len(req->src, totallen) + 1; + else + rctx->src_nents = sg_nents_for_len(req->src, totallen) + 2; + + gfp = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ? 
GFP_KERNEL : GFP_ATOMIC; + ret = sg_alloc_table(&rctx->src_tbl, rctx->src_nents, gfp); + if (ret) + return ret; + + /* Associated Data */ + sg_init_one(&rctx->adata_sg, rctx->adata, rctx->assoclen); + sg = qce_sgtable_add(&rctx->src_tbl, &rctx->adata_sg, + rctx->assoclen); + if (IS_ERR(sg)) { + ret = PTR_ERR(sg); + goto err_free; + } + /* src msg */ + sg = qce_sgtable_add(&rctx->src_tbl, msg_sg, cryptlen); + if (IS_ERR(sg)) { + ret = PTR_ERR(sg); + goto err_free; + } + if (!diff_dst) { + /* + * When src and dst buffers are same, there is already space + * in the buffer for padded 0's which is output in lieu of + * the MAC that is input + */ + if (!IS_DECRYPT(rctx->flags)) { + sg = qce_aead_prepare_ccm_result_buf(&rctx->src_tbl, req); + if (IS_ERR(sg)) { + ret = PTR_ERR(sg); + goto err_free; + } + } + } + sg_mark_end(sg); + rctx->src_sg = rctx->src_tbl.sgl; + totallen = cryptlen + rctx->assoclen; + rctx->src_nents = sg_nents_for_len(rctx->src_sg, totallen); + + if (diff_dst) { + sg = qce_aead_prepare_dst_buf(req); + if (IS_ERR(sg)) + goto err_free; + } else { + if (IS_ENCRYPT(rctx->flags)) + rctx->dst_nents = rctx->src_nents + 1; + else + rctx->dst_nents = rctx->src_nents; + rctx->dst_sg = rctx->src_sg; + } + + return 0; +err_free: + sg_free_table(&rctx->src_tbl); + return ret; +} + +static int qce_aead_prepare_buf(struct aead_request *req) +{ + struct qce_aead_reqctx *rctx = aead_request_ctx(req); + struct qce_alg_template *tmpl = to_aead_tmpl(crypto_aead_reqtfm(req)); + struct qce_device *qce = tmpl->qce; + struct scatterlist *sg; + bool diff_dst = (req->src != req->dst) ? true : false; + unsigned int totallen; + + totallen = rctx->cryptlen + rctx->assoclen; + + sg = qce_aead_prepare_dst_buf(req); + if (IS_ERR(sg)) + return PTR_ERR(sg); + if (diff_dst) { + rctx->src_nents = sg_nents_for_len(req->src, totallen); + if (rctx->src_nents < 0) { + dev_err(qce->dev, "Invalid numbers of src SG.\n"); + return -EINVAL; + } + rctx->src_sg = req->src; + } else { + rctx->src_nents = rctx->dst_nents - 1; + rctx->src_sg = rctx->dst_sg; + } + return 0; +} + +static int qce_aead_ccm_prepare_buf(struct aead_request *req) +{ + struct qce_aead_reqctx *rctx = aead_request_ctx(req); + struct crypto_aead *tfm = crypto_aead_reqtfm(req); + struct qce_aead_ctx *ctx = crypto_aead_ctx(tfm); + struct scatterlist *sg; + bool diff_dst = (req->src != req->dst) ? 
true : false; + unsigned int cryptlen; + + if (rctx->assoclen) + return qce_aead_ccm_prepare_buf_assoclen(req); + + if (IS_ENCRYPT(rctx->flags)) + return qce_aead_prepare_buf(req); + + cryptlen = rctx->cryptlen + ctx->authsize; + if (diff_dst) { + rctx->src_nents = sg_nents_for_len(req->src, cryptlen); + rctx->src_sg = req->src; + sg = qce_aead_prepare_dst_buf(req); + if (IS_ERR(sg)) + return PTR_ERR(sg); + } else { + rctx->src_nents = sg_nents_for_len(req->src, cryptlen); + rctx->src_sg = req->src; + rctx->dst_nents = rctx->src_nents; + rctx->dst_sg = rctx->src_sg; + } + + return 0; +} + +static int qce_aead_create_ccm_nonce(struct qce_aead_reqctx *rctx, struct qce_aead_ctx *ctx) +{ + unsigned int msglen_size; + u8 msg_len[4]; + int i; + + if (!rctx || !rctx->iv) + return -EINVAL; + + msglen_size = rctx->iv[0] + 1; + + if (msglen_size > 4) + msglen_size = 4; + + memcpy(&msg_len[0], &rctx->cryptlen, 4); + + memcpy(&rctx->ccm_nonce[0], rctx->iv, rctx->ivsize); + if (rctx->assoclen) + rctx->ccm_nonce[0] |= 1 << CCM_NONCE_ADATA_SHIFT; + rctx->ccm_nonce[0] |= ((ctx->authsize - 2) / 2) << + CCM_NONCE_AUTHSIZE_SHIFT; + for (i = 0; i < msglen_size; i++) + rctx->ccm_nonce[QCE_MAX_NONCE - i - 1] = msg_len[i]; + + return 0; +} + +static int +qce_aead_async_req_handle(struct crypto_async_request *async_req) +{ + struct aead_request *req = aead_request_cast(async_req); + struct qce_aead_reqctx *rctx = aead_request_ctx(req); + struct crypto_aead *tfm = crypto_aead_reqtfm(req); + struct qce_aead_ctx *ctx = crypto_tfm_ctx(async_req->tfm); + struct qce_alg_template *tmpl = to_aead_tmpl(crypto_aead_reqtfm(req)); + struct qce_device *qce = tmpl->qce; + enum dma_data_direction dir_src, dir_dst; + unsigned int totallen; + bool diff_dst; + int ret; + + if (IS_CCM_RFC4309(rctx->flags)) { + memset(rctx->ccm_rfc4309_iv, 0, QCE_MAX_IV_SIZE); + rctx->ccm_rfc4309_iv[0] = 3; + memcpy(&rctx->ccm_rfc4309_iv[1], ctx->ccm4309_salt, QCE_CCM4309_SALT_SIZE); + memcpy(&rctx->ccm_rfc4309_iv[4], req->iv, 8); + rctx->iv = rctx->ccm_rfc4309_iv; + rctx->ivsize = AES_BLOCK_SIZE; + } else { + rctx->iv = req->iv; + rctx->ivsize = crypto_aead_ivsize(tfm); + } + if (IS_CCM_RFC4309(rctx->flags)) + rctx->assoclen = req->assoclen - 8; + else + rctx->assoclen = req->assoclen; + + totallen = rctx->cryptlen + rctx->assoclen; + + diff_dst = (req->src != req->dst) ? true : false; + dir_src = diff_dst ? DMA_TO_DEVICE : DMA_BIDIRECTIONAL; + dir_dst = diff_dst ? 
DMA_FROM_DEVICE : DMA_BIDIRECTIONAL; + + if (IS_CCM(rctx->flags)) { + ret = qce_aead_create_ccm_nonce(rctx, ctx); + if (ret) + return ret; + } + if (IS_CCM(rctx->flags)) + ret = qce_aead_ccm_prepare_buf(req); + else + ret = qce_aead_prepare_buf(req); + + if (ret) + return ret; + ret = dma_map_sg(qce->dev, rctx->dst_sg, rctx->dst_nents, dir_dst); + if (ret < 0) + goto error_free; + + if (diff_dst) { + ret = dma_map_sg(qce->dev, rctx->src_sg, rctx->src_nents, dir_src); + if (ret < 0) + goto error_unmap_dst; + } + + ret = qce_dma_prep_sgs(&qce->dma, rctx->src_sg, rctx->src_nents, + rctx->dst_sg, rctx->dst_nents, + qce_aead_done, async_req); + if (ret) + goto error_unmap_src; + + qce_dma_issue_pending(&qce->dma); + + ret = qce_start(async_req, tmpl->crypto_alg_type); + if (ret) + goto error_terminate; + + return 0; + +error_terminate: + qce_dma_terminate_all(&qce->dma); +error_unmap_src: + if (diff_dst) + dma_unmap_sg(qce->dev, req->src, rctx->src_nents, dir_src); +error_unmap_dst: + dma_unmap_sg(qce->dev, rctx->dst_sg, rctx->dst_nents, dir_dst); +error_free: + if (IS_CCM(rctx->flags) && rctx->assoclen) { + sg_free_table(&rctx->src_tbl); + if (diff_dst) + sg_free_table(&rctx->dst_tbl); + } else { + sg_free_table(&rctx->dst_tbl); + } + return ret; +} + +static int qce_aead_crypt(struct aead_request *req, int encrypt) +{ + struct crypto_aead *tfm = crypto_aead_reqtfm(req); + struct qce_aead_reqctx *rctx = aead_request_ctx(req); + struct qce_aead_ctx *ctx = crypto_aead_ctx(tfm); + struct qce_alg_template *tmpl = to_aead_tmpl(tfm); + unsigned int blocksize = crypto_aead_blocksize(tfm); + + rctx->flags = tmpl->alg_flags; + rctx->flags |= encrypt ? QCE_ENCRYPT : QCE_DECRYPT; + + if (encrypt) + rctx->cryptlen = req->cryptlen; + else + rctx->cryptlen = req->cryptlen - ctx->authsize; + + /* CE does not handle 0 length messages */ + if (!rctx->cryptlen) { + if (!(IS_CCM(rctx->flags) && IS_DECRYPT(rctx->flags))) + return -EINVAL; + } + + /* + * CBC algorithms require message lengths to be + * multiples of block size. 
+ */ + if (IS_CBC(rctx->flags) && !IS_ALIGNED(rctx->cryptlen, blocksize)) + return -EINVAL; + + /* RFC4309 supported AAD size 16 bytes/20 bytes */ + if (IS_CCM_RFC4309(rctx->flags)) + if (crypto_ipsec_check_assoclen(req->assoclen)) + return -EINVAL; + + return tmpl->qce->async_req_enqueue(tmpl->qce, &req->base); +} + +static int qce_aead_encrypt(struct aead_request *req) +{ + return qce_aead_crypt(req, 1); +} + +static int qce_aead_decrypt(struct aead_request *req) +{ + return qce_aead_crypt(req, 0); +} + +static int qce_aead_ccm_setkey(struct crypto_aead *tfm, const u8 *key, + unsigned int keylen) +{ + struct qce_aead_ctx *ctx = crypto_aead_ctx(tfm); + unsigned long flags = to_aead_tmpl(tfm)->alg_flags; + + if (IS_CCM_RFC4309(flags)) { + if (keylen < QCE_CCM4309_SALT_SIZE) + return -EINVAL; + keylen -= QCE_CCM4309_SALT_SIZE; + memcpy(ctx->ccm4309_salt, key + keylen, QCE_CCM4309_SALT_SIZE); + } + + if (keylen != AES_KEYSIZE_128 && keylen != AES_KEYSIZE_256) + return -EINVAL; + + ctx->enc_keylen = keylen; + ctx->auth_keylen = keylen; + + memcpy(ctx->enc_key, key, keylen); + memcpy(ctx->auth_key, key, keylen); + + return 0; +} + +static int qce_aead_setkey(struct crypto_aead *tfm, const u8 *key, unsigned int keylen) +{ + struct qce_aead_ctx *ctx = crypto_aead_ctx(tfm); + struct crypto_authenc_keys authenc_keys; + unsigned long flags = to_aead_tmpl(tfm)->alg_flags; + u32 _key[6]; + int err; + + err = crypto_authenc_extractkeys(&authenc_keys, key, keylen); + if (err) + return err; + + if (authenc_keys.enckeylen > QCE_MAX_KEY_SIZE || + authenc_keys.authkeylen > QCE_MAX_KEY_SIZE) + return -EINVAL; + + if (IS_DES(flags)) { + err = verify_aead_des_key(tfm, authenc_keys.enckey, authenc_keys.enckeylen); + if (err) + return err; + } else if (IS_3DES(flags)) { + err = verify_aead_des3_key(tfm, authenc_keys.enckey, authenc_keys.enckeylen); + if (err) + return err; + /* + * The crypto engine does not support any two keys + * being the same for triple des algorithms. The + * verify_skcipher_des3_key does not check for all the + * below conditions. Return -EINVAL in case any two keys + * are the same. Revisit to see if a fallback cipher + * is needed to handle this condition. 
+ */ + memcpy(_key, authenc_keys.enckey, DES3_EDE_KEY_SIZE); + if (!((_key[0] ^ _key[2]) | (_key[1] ^ _key[3])) || + !((_key[2] ^ _key[4]) | (_key[3] ^ _key[5])) || + !((_key[0] ^ _key[4]) | (_key[1] ^ _key[5]))) + return -EINVAL; + } else if (IS_AES(flags)) { + /* No random key sizes */ + if (authenc_keys.enckeylen != AES_KEYSIZE_128 && + authenc_keys.enckeylen != AES_KEYSIZE_256) + return -EINVAL; + } + + ctx->enc_keylen = authenc_keys.enckeylen; + ctx->auth_keylen = authenc_keys.authkeylen; + + memcpy(ctx->enc_key, authenc_keys.enckey, authenc_keys.enckeylen); + + memset(ctx->auth_key, 0, sizeof(ctx->auth_key)); + memcpy(ctx->auth_key, authenc_keys.authkey, authenc_keys.authkeylen); + + return 0; +} + +static int qce_aead_setauthsize(struct crypto_aead *tfm, unsigned int authsize) +{ + struct qce_aead_ctx *ctx = crypto_aead_ctx(tfm); + unsigned long flags = to_aead_tmpl(tfm)->alg_flags; + + if (IS_CCM(flags)) { + if (authsize < 4 || authsize > 16 || authsize % 2) + return -EINVAL; + if (IS_CCM_RFC4309(flags) && (authsize < 8 || authsize % 4)) + return -EINVAL; + } + ctx->authsize = authsize; + return 0; +} + +static int qce_aead_init(struct crypto_aead *tfm) +{ + crypto_aead_set_reqsize(tfm, sizeof(struct qce_aead_reqctx)); + return 0; +} + +struct qce_aead_def { + unsigned long flags; + const char *name; + const char *drv_name; + unsigned int blocksize; + unsigned int chunksize; + unsigned int ivsize; + unsigned int maxauthsize; +}; + +static const struct qce_aead_def aead_def[] = { + { + .flags = QCE_ALG_DES | QCE_MODE_CBC | QCE_HASH_SHA1_HMAC, + .name = "authenc(hmac(sha1),cbc(des))", + .drv_name = "authenc-hmac-sha1-cbc-des-qce", + .blocksize = DES_BLOCK_SIZE, + .ivsize = DES_BLOCK_SIZE, + .maxauthsize = SHA1_DIGEST_SIZE, + }, + { + .flags = QCE_ALG_3DES | QCE_MODE_CBC | QCE_HASH_SHA1_HMAC, + .name = "authenc(hmac(sha1),cbc(des3_ede))", + .drv_name = "authenc-hmac-sha1-cbc-3des-qce", + .blocksize = DES3_EDE_BLOCK_SIZE, + .ivsize = DES3_EDE_BLOCK_SIZE, + .maxauthsize = SHA1_DIGEST_SIZE, + }, + { + .flags = QCE_ALG_DES | QCE_MODE_CBC | QCE_HASH_SHA256_HMAC, + .name = "authenc(hmac(sha256),cbc(des))", + .drv_name = "authenc-hmac-sha256-cbc-des-qce", + .blocksize = DES_BLOCK_SIZE, + .ivsize = DES_BLOCK_SIZE, + .maxauthsize = SHA256_DIGEST_SIZE, + }, + { + .flags = QCE_ALG_3DES | QCE_MODE_CBC | QCE_HASH_SHA256_HMAC, + .name = "authenc(hmac(sha256),cbc(des3_ede))", + .drv_name = "authenc-hmac-sha256-cbc-3des-qce", + .blocksize = DES3_EDE_BLOCK_SIZE, + .ivsize = DES3_EDE_BLOCK_SIZE, + .maxauthsize = SHA256_DIGEST_SIZE, + }, + { + .flags = QCE_ALG_AES | QCE_MODE_CBC | QCE_HASH_SHA256_HMAC, + .name = "authenc(hmac(sha256),cbc(aes))", + .drv_name = "authenc-hmac-sha256-cbc-aes-qce", + .blocksize = AES_BLOCK_SIZE, + .ivsize = AES_BLOCK_SIZE, + .maxauthsize = SHA256_DIGEST_SIZE, + }, + { + .flags = QCE_ALG_AES | QCE_MODE_CCM, + .name = "ccm(aes)", + .drv_name = "ccm-aes-qce", + .blocksize = 1, + .ivsize = AES_BLOCK_SIZE, + .maxauthsize = AES_BLOCK_SIZE, + }, + { + .flags = QCE_ALG_AES | QCE_MODE_CCM | QCE_MODE_CCM_RFC4309, + .name = "rfc4309(ccm(aes))", + .drv_name = "rfc4309-ccm-aes-qce", + .blocksize = 1, + .ivsize = 8, + .maxauthsize = AES_BLOCK_SIZE, + }, +}; + +static int qce_aead_register_one(const struct qce_aead_def *def, struct qce_device *qce) +{ + struct qce_alg_template *tmpl; + struct aead_alg *alg; + int ret; + + tmpl = kzalloc(sizeof(*tmpl), GFP_KERNEL); + if (!tmpl) + return -ENOMEM; + + alg = &tmpl->alg.aead; + + snprintf(alg->base.cra_name, CRYPTO_MAX_ALG_NAME, "%s", 
def->name); + snprintf(alg->base.cra_driver_name, CRYPTO_MAX_ALG_NAME, "%s", + def->drv_name); + + alg->base.cra_blocksize = def->blocksize; + alg->chunksize = def->chunksize; + alg->ivsize = def->ivsize; + alg->maxauthsize = def->maxauthsize; + if (IS_CCM(def->flags)) + alg->setkey = qce_aead_ccm_setkey; + else + alg->setkey = qce_aead_setkey; + alg->setauthsize = qce_aead_setauthsize; + alg->encrypt = qce_aead_encrypt; + alg->decrypt = qce_aead_decrypt; + alg->init = qce_aead_init; + + alg->base.cra_priority = 300; + alg->base.cra_flags = CRYPTO_ALG_ASYNC | + CRYPTO_ALG_ALLOCATES_MEMORY | + CRYPTO_ALG_KERN_DRIVER_ONLY; + alg->base.cra_ctxsize = sizeof(struct qce_aead_ctx); + alg->base.cra_alignmask = 0; + alg->base.cra_module = THIS_MODULE; + + INIT_LIST_HEAD(&tmpl->entry); + tmpl->crypto_alg_type = CRYPTO_ALG_TYPE_AEAD; + tmpl->alg_flags = def->flags; + tmpl->qce = qce; + + ret = crypto_register_aead(alg); + if (ret) { + kfree(tmpl); + dev_err(qce->dev, "%s registration failed\n", alg->base.cra_name); + return ret; + } + + list_add_tail(&tmpl->entry, &aead_algs); + dev_dbg(qce->dev, "%s is registered\n", alg->base.cra_name); + return 0; +} + +static void qce_aead_unregister(struct qce_device *qce) +{ + struct qce_alg_template *tmpl, *n; + + list_for_each_entry_safe(tmpl, n, &aead_algs, entry) { + crypto_unregister_aead(&tmpl->alg.aead); + list_del(&tmpl->entry); + kfree(tmpl); + } +} + +static int qce_aead_register(struct qce_device *qce) +{ + int ret, i; + + for (i = 0; i < ARRAY_SIZE(aead_def); i++) { + ret = qce_aead_register_one(&aead_def[i], qce); + if (ret) + goto err; + } + + return 0; +err: + qce_aead_unregister(qce); + return ret; +} + +const struct qce_algo_ops aead_ops = { + .type = CRYPTO_ALG_TYPE_AEAD, + .register_algs = qce_aead_register, + .unregister_algs = qce_aead_unregister, + .async_req_handle = qce_aead_async_req_handle, +}; diff --git a/drivers/crypto/qce/aead.h b/drivers/crypto/qce/aead.h new file mode 100644 index 000000000000..3d1f2039930b --- /dev/null +++ b/drivers/crypto/qce/aead.h @@ -0,0 +1,53 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (c) 2021, Linaro Limited. All rights reserved. 
+ */ + +#ifndef _AEAD_H_ +#define _AEAD_H_ + +#include "common.h" +#include "core.h" + +#define QCE_MAX_KEY_SIZE 64 +#define QCE_CCM4309_SALT_SIZE 3 + +struct qce_aead_ctx { + u8 enc_key[QCE_MAX_KEY_SIZE]; + u8 auth_key[QCE_MAX_KEY_SIZE]; + u8 ccm4309_salt[QCE_CCM4309_SALT_SIZE]; + unsigned int enc_keylen; + unsigned int auth_keylen; + unsigned int authsize; +}; + +struct qce_aead_reqctx { + unsigned long flags; + u8 *iv; + unsigned int ivsize; + int src_nents; + int dst_nents; + struct scatterlist result_sg; + struct scatterlist adata_sg; + struct sg_table dst_tbl; + struct sg_table src_tbl; + struct scatterlist *dst_sg; + struct scatterlist *src_sg; + unsigned int cryptlen; + unsigned int assoclen; + unsigned char *adata; + u8 ccm_nonce[QCE_MAX_NONCE]; + u8 ccmresult_buf[QCE_BAM_BURST_SIZE]; + u8 ccm_rfc4309_iv[QCE_MAX_IV_SIZE]; +}; + +static inline struct qce_alg_template *to_aead_tmpl(struct crypto_aead *tfm) +{ + struct aead_alg *alg = crypto_aead_alg(tfm); + + return container_of(alg, struct qce_alg_template, alg.aead); +} + +extern const struct qce_algo_ops aead_ops; + +#endif /* _AEAD_H_ */ diff --git a/drivers/crypto/qce/common.h b/drivers/crypto/qce/common.h index 3ffe719b79e4..ba7ff6b68825 100644 --- a/drivers/crypto/qce/common.h +++ b/drivers/crypto/qce/common.h @@ -11,6 +11,7 @@ #include #include #include +#include /* xts du size */ #define QCE_SECTOR_SIZE 512 @@ -88,6 +89,7 @@ struct qce_alg_template { union { struct skcipher_alg skcipher; struct ahash_alg ahash; + struct aead_alg aead; } alg; struct qce_device *qce; const u8 *hash_zero; diff --git a/drivers/crypto/qce/core.c b/drivers/crypto/qce/core.c index 80b75085c265..d3780be44a76 100644 --- a/drivers/crypto/qce/core.c +++ b/drivers/crypto/qce/core.c @@ -17,6 +17,7 @@ #include "core.h" #include "cipher.h" #include "sha.h" +#include "aead.h" #define QCE_MAJOR_VERSION5 0x05 #define QCE_QUEUE_LENGTH 1 @@ -28,6 +29,9 @@ static const struct qce_algo_ops *qce_ops[] = { #ifdef CONFIG_CRYPTO_DEV_QCE_SHA &ahash_ops, #endif +#ifdef CONFIG_CRYPTO_DEV_QCE_AEAD + &aead_ops, +#endif }; static void qce_unregister_algs(struct qce_device *qce) From patchwork Thu Feb 25 18:27:14 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thara Gopinath X-Patchwork-Id: 387236 Delivered-To: patch@linaro.org Received: by 2002:a02:290e:0:0:0:0:0 with SMTP id p14csp545353jap; Thu, 25 Feb 2021 10:30:09 -0800 (PST) X-Google-Smtp-Source: ABdhPJy8NAaEmnRiHHyoyAsbKk6u6eDEsn2So36q/57D3uh7Cyzx6qAFpbiTSCD43aVYRqMk/Gni X-Received: by 2002:a17:906:380c:: with SMTP id v12mr3906979ejc.65.1614277809302; Thu, 25 Feb 2021 10:30:09 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1614277809; cv=none; d=google.com; s=arc-20160816; b=bG1Q+AVSzhQS5C0L3sr14YNaQCEydE3lfrfhe2QWkTPaiQCKKBhsSYCUXGEyloD0uw opXsn6yGhPW0AtLZy2r+LbhZPN6/U+Sd2a0d8LkzkWePrK4azraNn+90+dejYUoUdo15 M3xPShqS9c8jUEbiyQQZA/sGAvfxoTLSqaHWeKFWUNU+XcCZq0nsQYAWfkzRKQJ93OTz XetPZ3JemSfWUEijWISy4y05OAKkuBoYsimLITk484TllyRYRdod2Pxh6qRAjYY04kP6 ilWLhZdWqt9JiJUdGsTyAEP2ZF+PnvOjkxUTHras1nyFVq1CIaOZsp8ODKer4+m/nRNH QqfA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=GAQ7mJYjum/vJpvXxz96AjXoLhaENBWrKzXJpZF6Wpc=; b=s1CsENiOf7QQVkZ8fKYAz+M/gp9bz3z75W4dUuXpy22NsZeb3jGGR7Otb7J3afi7Rl 7FUovohZKr8KOn4yy+S4x4UU++fA8IaNHWRLFRM6am1SR8GxNXaMWPRAFEfzPG+Ba+iV 
From patchwork Thu Feb 25 18:27:14 2021
X-Patchwork-Submitter: Thara Gopinath
X-Patchwork-Id: 387236
From: Thara Gopinath
To: herbert@gondor.apana.org.au, davem@davemloft.net, bjorn.andersson@linaro.org
Cc: ebiggers@google.com, ardb@kernel.org, sivaprak@codeaurora.org,
    linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 5/7] crypto: qce: common: Clean up qce_auth_cfg
Date: Thu, 25 Feb 2021 13:27:14 -0500
Message-Id: <20210225182716.1402449-6-thara.gopinath@linaro.org>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210225182716.1402449-1-thara.gopinath@linaro.org>
References: <20210225182716.1402449-1-thara.gopinath@linaro.org>
X-Mailing-List: linux-crypto@vger.kernel.org

Remove various redundant checks in qce_auth_cfg. Also allow qce_auth_cfg
to take auth_size as a parameter, which is a required setting for
ccm(aes) algorithms.

Signed-off-by: Thara Gopinath
---
 drivers/crypto/qce/common.c | 21 +++++++++------------
 1 file changed, 9 insertions(+), 12 deletions(-)

-- 
2.25.1

diff --git a/drivers/crypto/qce/common.c b/drivers/crypto/qce/common.c
index 2485aa371d83..05a71c5ecf61 100644
--- a/drivers/crypto/qce/common.c
+++ b/drivers/crypto/qce/common.c
@@ -97,11 +97,11 @@ static inline void qce_crypto_go(struct qce_device *qce, bool result_dump)
 }
 
 #ifdef CONFIG_CRYPTO_DEV_QCE_SHA
-static u32 qce_auth_cfg(unsigned long flags, u32 key_size)
+static u32 qce_auth_cfg(unsigned long flags, u32 key_size, u32 auth_size)
 {
     u32 cfg = 0;
 
-    if (IS_AES(flags) && (IS_CCM(flags) || IS_CMAC(flags)))
+    if (IS_CCM(flags) || IS_CMAC(flags))
         cfg |= AUTH_ALG_AES << AUTH_ALG_SHIFT;
     else
         cfg |= AUTH_ALG_SHA << AUTH_ALG_SHIFT;
@@ -119,15 +119,16 @@ static u32 qce_auth_cfg(unsigned long flags, u32 key_size)
         cfg |= AUTH_SIZE_SHA256 << AUTH_SIZE_SHIFT;
     else if (IS_CMAC(flags))
         cfg |= AUTH_SIZE_ENUM_16_BYTES << AUTH_SIZE_SHIFT;
+    else if (IS_CCM(flags))
+        cfg |= (auth_size - 1) << AUTH_SIZE_SHIFT;
 
     if (IS_SHA1(flags) || IS_SHA256(flags))
         cfg |= AUTH_MODE_HASH << AUTH_MODE_SHIFT;
-    else if (IS_SHA1_HMAC(flags) || IS_SHA256_HMAC(flags) ||
-             IS_CBC(flags) || IS_CTR(flags))
+    else if (IS_SHA1_HMAC(flags) || IS_SHA256_HMAC(flags))
         cfg |= AUTH_MODE_HMAC << AUTH_MODE_SHIFT;
-    else if (IS_AES(flags) && IS_CCM(flags))
+    else if (IS_CCM(flags))
         cfg |= AUTH_MODE_CCM << AUTH_MODE_SHIFT;
-    else if (IS_AES(flags) && IS_CMAC(flags))
+    else if (IS_CMAC(flags))
         cfg |= AUTH_MODE_CMAC << AUTH_MODE_SHIFT;
 
     if (IS_SHA(flags) || IS_SHA_HMAC(flags))
@@ -136,10 +137,6 @@ static u32 qce_auth_cfg(unsigned long flags, u32 key_size)
     if (IS_CCM(flags))
         cfg |= QCE_MAX_NONCE_WORDS << AUTH_NONCE_NUM_WORDS_SHIFT;
 
-    if (IS_CBC(flags) || IS_CTR(flags) || IS_CCM(flags) ||
-        IS_CMAC(flags))
-        cfg |= BIT(AUTH_LAST_SHIFT) | BIT(AUTH_FIRST_SHIFT);
-
     return cfg;
 }
 
@@ -171,7 +168,7 @@ static int qce_setup_regs_ahash(struct crypto_async_request *async_req)
         qce_clear_array(qce, REG_AUTH_KEY0, 16);
         qce_clear_array(qce, REG_AUTH_BYTECNT0, 4);
 
-        auth_cfg = qce_auth_cfg(rctx->flags, rctx->authklen);
+        auth_cfg = qce_auth_cfg(rctx->flags, rctx->authklen, digestsize);
     }
 
     if (IS_SHA_HMAC(rctx->flags) || IS_CMAC(rctx->flags)) {
@@ -199,7 +196,7 @@ static int qce_setup_regs_ahash(struct crypto_async_request *async_req)
     qce_write_array(qce, REG_AUTH_BYTECNT0, (u32 *)rctx->byte_count, 2);
 
-    auth_cfg = qce_auth_cfg(rctx->flags, 0);
+    auth_cfg = qce_auth_cfg(rctx->flags, 0, digestsize);
 
     if (rctx->last_blk)
         auth_cfg |= BIT(AUTH_LAST_SHIFT);
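One detail worth noting in the hunk above: for CCM the tag length is programmed directly into the AUTH_SIZE field as auth_size - 1, instead of one of the fixed SHA enums. A small standalone sketch of that encoding (illustrative only; the real AUTH_SIZE_SHIFT value comes from the driver's regs-v5.h, and the helper name here is made up):

#include <stdint.h>
#include <stdio.h>

#define AUTH_SIZE_SHIFT 9   /* stand-in value; the driver takes this from regs-v5.h */

/*
 * CCM allows even ICV (tag) lengths from 4 to 16 bytes; the hardware field
 * stores the length minus one, mirroring "(auth_size - 1) << AUTH_SIZE_SHIFT".
 */
static uint32_t ccm_auth_size_field(unsigned int authsize)
{
    if (authsize < 4 || authsize > 16 || (authsize & 1))
        return 0;   /* callers are expected to have rejected these already */
    return (uint32_t)(authsize - 1) << AUTH_SIZE_SHIFT;
}

int main(void)
{
    for (unsigned int sz = 4; sz <= 16; sz += 2)
        printf("authsize %2u -> AUTH_SIZE field 0x%04x\n",
               sz, (unsigned int)ccm_auth_size_field(sz));
    return 0;
}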
From patchwork Thu Feb 25 18:27:15 2021
X-Patchwork-Submitter: Thara Gopinath
X-Patchwork-Id: 387234
From: Thara Gopinath
To: herbert@gondor.apana.org.au, davem@davemloft.net, bjorn.andersson@linaro.org
Cc: ebiggers@google.com, ardb@kernel.org, sivaprak@codeaurora.org,
    linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 6/7] crypto: qce: common: Add support for AEAD algorithms
Date: Thu, 25 Feb 2021 13:27:15 -0500
Message-Id: <20210225182716.1402449-7-thara.gopinath@linaro.org>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210225182716.1402449-1-thara.gopinath@linaro.org>
References: <20210225182716.1402449-1-thara.gopinath@linaro.org>
X-Mailing-List: linux-crypto@vger.kernel.org

Add register programming sequence for enabling AEAD algorithms on the
Qualcomm crypto engine.

Signed-off-by: Thara Gopinath
---
 drivers/crypto/qce/common.c | 155 +++++++++++++++++++++++++++++++++++-
 1 file changed, 153 insertions(+), 2 deletions(-)

-- 
2.25.1

diff --git a/drivers/crypto/qce/common.c b/drivers/crypto/qce/common.c
index 05a71c5ecf61..54d209cb0525 100644
--- a/drivers/crypto/qce/common.c
+++ b/drivers/crypto/qce/common.c
@@ -15,6 +15,16 @@
 #include "core.h"
 #include "regs-v5.h"
 #include "sha.h"
+#include "aead.h"
+
+static const u32 std_iv_sha1[SHA256_DIGEST_SIZE / sizeof(u32)] = {
+    SHA1_H0, SHA1_H1, SHA1_H2, SHA1_H3, SHA1_H4, 0, 0, 0
+};
+
+static const u32 std_iv_sha256[SHA256_DIGEST_SIZE / sizeof(u32)] = {
+    SHA256_H0, SHA256_H1, SHA256_H2, SHA256_H3,
+    SHA256_H4, SHA256_H5, SHA256_H6, SHA256_H7
+};
 
 static inline u32 qce_read(struct qce_device *qce, u32 offset)
 {
@@ -96,7 +106,7 @@ static inline void qce_crypto_go(struct qce_device *qce, bool result_dump)
     qce_write(qce, REG_GOPROC, BIT(GO_SHIFT));
 }
 
-#ifdef CONFIG_CRYPTO_DEV_QCE_SHA
+#if defined(CONFIG_CRYPTO_DEV_QCE_SHA) || defined(CONFIG_CRYPTO_DEV_QCE_AEAD)
 static u32 qce_auth_cfg(unsigned long flags, u32 key_size, u32 auth_size)
 {
     u32 cfg = 0;
@@ -139,7 +149,9 @@ static u32 qce_auth_cfg(unsigned long flags, u32 key_size, u32 auth_size)
 
     return cfg;
 }
+#endif
 
+#ifdef CONFIG_CRYPTO_DEV_QCE_SHA
 static int qce_setup_regs_ahash(struct crypto_async_request *async_req)
 {
     struct ahash_request *req = ahash_request_cast(async_req);
@@ -225,7 +237,7 @@ static int qce_setup_regs_ahash(struct crypto_async_request *async_req)
 }
 #endif
 
-#ifdef CONFIG_CRYPTO_DEV_QCE_SKCIPHER
+#if defined(CONFIG_CRYPTO_DEV_QCE_SKCIPHER) || defined(CONFIG_CRYPTO_DEV_QCE_AEAD)
 static u32 qce_encr_cfg(unsigned long flags, u32 aes_key_size)
 {
     u32 cfg = 0;
@@ -271,7 +283,9 @@ static u32 qce_encr_cfg(unsigned long flags, u32 aes_key_size)
 
     return cfg;
 }
+#endif
 
+#ifdef CONFIG_CRYPTO_DEV_QCE_SKCIPHER
 static void qce_xts_swapiv(__be32 *dst, const u8 *src, unsigned int ivsize)
 {
     u8 swap[QCE_AES_IV_LENGTH];
@@ -386,6 +400,139 @@ static int qce_setup_regs_skcipher(struct crypto_async_request *async_req)
 }
 #endif
 
+#ifdef CONFIG_CRYPTO_DEV_QCE_AEAD
+static int qce_setup_regs_aead(struct crypto_async_request *async_req)
+{
+    struct aead_request *req = aead_request_cast(async_req);
+    struct qce_aead_reqctx *rctx = aead_request_ctx(req);
+    struct qce_aead_ctx *ctx = crypto_tfm_ctx(async_req->tfm);
+    struct qce_alg_template *tmpl = to_aead_tmpl(crypto_aead_reqtfm(req));
+    struct qce_device *qce = tmpl->qce;
+    __be32 enckey[QCE_MAX_CIPHER_KEY_SIZE / sizeof(__be32)] = {0};
+    __be32 enciv[QCE_MAX_IV_SIZE / sizeof(__be32)] = {0};
+    __be32 authkey[QCE_SHA_HMAC_KEY_SIZE / sizeof(__be32)] = {0};
+    __be32 authiv[SHA256_DIGEST_SIZE / sizeof(__be32)] = {0};
+    __be32 authnonce[QCE_MAX_NONCE / sizeof(__be32)] = {0};
+    unsigned int enc_keylen = ctx->enc_keylen;
+    unsigned int auth_keylen = ctx->auth_keylen;
+    unsigned int enc_ivsize = rctx->ivsize;
+    unsigned int auth_ivsize;
+    unsigned int enckey_words, enciv_words;
+    unsigned int authkey_words, authiv_words, authnonce_words;
+    unsigned long flags = rctx->flags;
+    u32 encr_cfg = 0, auth_cfg = 0, config, totallen;
+    u32 *iv_last_word;
+
+    qce_setup_config(qce);
+
+    /* Write encryption key */
+    qce_cpu_to_be32p_array(enckey, ctx->enc_key, enc_keylen);
+    enckey_words = enc_keylen / sizeof(u32);
+    qce_write_array(qce, REG_ENCR_KEY0, (u32 *)enckey, enckey_words);
+
+    /* Write encryption iv */
+    qce_cpu_to_be32p_array(enciv, rctx->iv, enc_ivsize);
+    enciv_words = enc_ivsize / sizeof(u32);
+    qce_write_array(qce, REG_CNTR0_IV0, (u32 *)enciv, enciv_words);
+
+    if (IS_CCM(rctx->flags)) {
+        iv_last_word = (u32 *)&enciv[enciv_words - 1];
+//      qce_write(qce, REG_CNTR3_IV3, enciv[enciv_words - 1] + 1);
+        qce_write(qce, REG_CNTR3_IV3, (*iv_last_word) + 1);
+        qce_write_array(qce, REG_ENCR_CCM_INT_CNTR0, (u32 *)enciv, enciv_words);
+        qce_write(qce, REG_CNTR_MASK, ~0);
+        qce_write(qce, REG_CNTR_MASK0, ~0);
+        qce_write(qce, REG_CNTR_MASK1, ~0);
+        qce_write(qce, REG_CNTR_MASK2, ~0);
+    }
+
+    /* Clear authentication IV and KEY registers of previous values */
+    qce_clear_array(qce, REG_AUTH_IV0, 16);
+    qce_clear_array(qce, REG_AUTH_KEY0, 16);
+
+    /* Clear byte count */
+    qce_clear_array(qce, REG_AUTH_BYTECNT0, 4);
+
+    /* Write authentication key */
+    qce_cpu_to_be32p_array(authkey, ctx->auth_key, auth_keylen);
+    authkey_words = DIV_ROUND_UP(auth_keylen, sizeof(u32));
+    qce_write_array(qce, REG_AUTH_KEY0, (u32 *)authkey, authkey_words);
+
+    if (IS_SHA_HMAC(rctx->flags)) {
+        /* Write default authentication iv */
+        if (IS_SHA1_HMAC(rctx->flags)) {
+            auth_ivsize = SHA1_DIGEST_SIZE;
+            memcpy(authiv, std_iv_sha1, auth_ivsize);
+        } else if (IS_SHA256_HMAC(rctx->flags)) {
+            auth_ivsize = SHA256_DIGEST_SIZE;
+            memcpy(authiv, std_iv_sha256, auth_ivsize);
+        }
+        authiv_words = auth_ivsize / sizeof(u32);
+        qce_write_array(qce, REG_AUTH_IV0, (u32 *)authiv, authiv_words);
+    }
+
+    if (IS_CCM(rctx->flags)) {
+        qce_cpu_to_be32p_array(authnonce, rctx->ccm_nonce, QCE_MAX_NONCE);
+        authnonce_words = QCE_MAX_NONCE / sizeof(u32);
+        qce_write_array(qce, REG_AUTH_INFO_NONCE0, (u32 *)authnonce, authnonce_words);
+    }
+
+    /* Set up ENCR_SEG_CFG */
+    encr_cfg = qce_encr_cfg(flags, enc_keylen);
+    if (IS_ENCRYPT(flags))
+        encr_cfg |= BIT(ENCODE_SHIFT);
+    qce_write(qce, REG_ENCR_SEG_CFG, encr_cfg);
+
+    /* Set up AUTH_SEG_CFG */
+    auth_cfg = qce_auth_cfg(rctx->flags, auth_keylen, ctx->authsize);
+    auth_cfg |= BIT(AUTH_LAST_SHIFT);
+    auth_cfg |= BIT(AUTH_FIRST_SHIFT);
+    if (IS_ENCRYPT(flags)) {
+        if (IS_CCM(rctx->flags))
+            auth_cfg |= AUTH_POS_BEFORE << AUTH_POS_SHIFT;
+        else
+            auth_cfg |= AUTH_POS_AFTER << AUTH_POS_SHIFT;
+    } else {
+        if (IS_CCM(rctx->flags))
+            auth_cfg |= AUTH_POS_AFTER << AUTH_POS_SHIFT;
+        else
+            auth_cfg |= AUTH_POS_BEFORE << AUTH_POS_SHIFT;
+    }
+    qce_write(qce, REG_AUTH_SEG_CFG, auth_cfg);
+
+    totallen = rctx->cryptlen + rctx->assoclen;
+
+    /* Set the encryption size and start offset */
+    if (IS_CCM(rctx->flags) && IS_DECRYPT(rctx->flags))
+        qce_write(qce, REG_ENCR_SEG_SIZE, rctx->cryptlen + ctx->authsize);
+    else
+        qce_write(qce, REG_ENCR_SEG_SIZE, rctx->cryptlen);
+    qce_write(qce, REG_ENCR_SEG_START, rctx->assoclen & 0xffff);
+
+    /* Set the authentication size and start offset */
+    qce_write(qce, REG_AUTH_SEG_SIZE, totallen);
+    qce_write(qce, REG_AUTH_SEG_START, 0);
+
+    /* Write total length */
+    if (IS_CCM(rctx->flags) && IS_DECRYPT(rctx->flags))
+        qce_write(qce, REG_SEG_SIZE, totallen + ctx->authsize);
+    else
+        qce_write(qce, REG_SEG_SIZE, totallen);
+
+    /* get little endianness */
+    config = qce_config_reg(qce, 1);
+    qce_write(qce, REG_CONFIG, config);
+
+    /* Start the process */
+    if (IS_CCM(flags))
+        qce_crypto_go(qce, 0);
+    else
+        qce_crypto_go(qce, 1);
+
+    return 0;
+}
+#endif
+
 int qce_start(struct crypto_async_request *async_req, u32 type)
 {
     switch (type) {
@@ -396,6 +543,10 @@ int qce_start(struct crypto_async_request *async_req, u32 type)
 #ifdef CONFIG_CRYPTO_DEV_QCE_SHA
     case CRYPTO_ALG_TYPE_AHASH:
         return qce_setup_regs_ahash(async_req);
+#endif
+#ifdef CONFIG_CRYPTO_DEV_QCE_AEAD
+    case CRYPTO_ALG_TYPE_AEAD:
+        return qce_setup_regs_aead(async_req);
 #endif
     default:
         return -EINVAL;
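To make the CCM segment-size programming above concrete, here is a small worked sketch (plain C, hypothetical request sizes, not driver code) of the values that would be written for a CCM decryption with a 16-byte ICV, following the same arithmetic as qce_setup_regs_aead():

#include <stdio.h>

int main(void)
{
    /* Hypothetical request: 16 bytes of AAD, 256 bytes of payload, 16-byte ICV. */
    unsigned int assoclen = 16;
    unsigned int cryptlen = 256;   /* payload length as tracked in the reqctx */
    unsigned int authsize = 16;

    unsigned int totallen = cryptlen + assoclen;

    /* CCM decrypt: the engine also consumes the ICV, so it is added back in. */
    unsigned int encr_seg_size  = cryptlen + authsize;
    unsigned int encr_seg_start = assoclen & 0xffff;
    unsigned int auth_seg_size  = totallen;
    unsigned int seg_size       = totallen + authsize;

    printf("ENCR_SEG_SIZE=%u ENCR_SEG_START=%u AUTH_SEG_SIZE=%u SEG_SIZE=%u\n",
           encr_seg_size, encr_seg_start, auth_seg_size, seg_size);
    return 0;
}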
From patchwork Thu Feb 25 18:27:16 2021
X-Patchwork-Submitter: Thara Gopinath
X-Patchwork-Id: 387237
From: Thara Gopinath
To: herbert@gondor.apana.org.au, davem@davemloft.net, bjorn.andersson@linaro.org
Cc: ebiggers@google.com, ardb@kernel.org, sivaprak@codeaurora.org,
    linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 7/7] crypto: qce: aead: Schedule fallback algorithm
Date: Thu, 25 Feb 2021 13:27:16 -0500
Message-Id: <20210225182716.1402449-8-thara.gopinath@linaro.org>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210225182716.1402449-1-thara.gopinath@linaro.org>
References: <20210225182716.1402449-1-thara.gopinath@linaro.org>
X-Mailing-List: linux-crypto@vger.kernel.org

Qualcomm crypto engine does not handle the following scenarios and
will issue an abort. In such cases, pass on the transformation to a
fallback algorithm.

- DES3 algorithms with all three keys the same.
- AES-192 algorithms.
- Zero-length messages.

Signed-off-by: Thara Gopinath
---
 drivers/crypto/qce/aead.c | 58 ++++++++++++++++++++++++++++++++-------
 drivers/crypto/qce/aead.h |  3 ++
 2 files changed, 51 insertions(+), 10 deletions(-)

-- 
2.25.1

diff --git a/drivers/crypto/qce/aead.c b/drivers/crypto/qce/aead.c
index b594c4bb2640..4c2d024e5296 100644
--- a/drivers/crypto/qce/aead.c
+++ b/drivers/crypto/qce/aead.c
@@ -492,7 +492,20 @@ static int qce_aead_crypt(struct aead_request *req, int encrypt)
     /* CE does not handle 0 length messages */
     if (!rctx->cryptlen) {
         if (!(IS_CCM(rctx->flags) && IS_DECRYPT(rctx->flags)))
-            return -EINVAL;
+            ctx->need_fallback = true;
+    }
+
+    /* If fallback is needed, schedule and exit */
+    if (ctx->need_fallback) {
+        aead_request_set_tfm(&rctx->fallback_req, ctx->fallback);
+        aead_request_set_callback(&rctx->fallback_req, req->base.flags,
+                                  req->base.complete, req->base.data);
+        aead_request_set_crypt(&rctx->fallback_req, req->src,
+                               req->dst, req->cryptlen, req->iv);
+        aead_request_set_ad(&rctx->fallback_req, req->assoclen);
+
+        return encrypt ? crypto_aead_encrypt(&rctx->fallback_req) :
+                         crypto_aead_decrypt(&rctx->fallback_req);
     }
 
     /*
@@ -533,7 +546,7 @@ static int qce_aead_ccm_setkey(struct crypto_aead *tfm, const u8 *key,
         memcpy(ctx->ccm4309_salt, key + keylen, QCE_CCM4309_SALT_SIZE);
     }
 
-    if (keylen != AES_KEYSIZE_128 && keylen != AES_KEYSIZE_256)
+    if (keylen != AES_KEYSIZE_128 && keylen != AES_KEYSIZE_256 && keylen != AES_KEYSIZE_192)
         return -EINVAL;
 
     ctx->enc_keylen = keylen;
@@ -542,7 +555,12 @@ static int qce_aead_ccm_setkey(struct crypto_aead *tfm, const u8 *key,
     memcpy(ctx->enc_key, key, keylen);
     memcpy(ctx->auth_key, key, keylen);
 
-    return 0;
+    if (keylen == AES_KEYSIZE_192)
+        ctx->need_fallback = true;
+
+    return IS_CCM_RFC4309(flags) ?
+        crypto_aead_setkey(ctx->fallback, key, keylen + QCE_CCM4309_SALT_SIZE) :
+        crypto_aead_setkey(ctx->fallback, key, keylen);
 }
 
 static int qce_aead_setkey(struct crypto_aead *tfm, const u8 *key, unsigned int keylen)
@@ -573,20 +591,21 @@ static int qce_aead_setkey(struct crypto_aead *tfm, const u8 *key, unsigned int
          * The crypto engine does not support any two keys
          * being the same for triple des algorithms. The
          * verify_skcipher_des3_key does not check for all the
-         * below conditions. Return -EINVAL in case any two keys
-         * are the same. Revisit to see if a fallback cipher
-         * is needed to handle this condition.
+         * below conditions. Schedule fallback in this case.
          */
         memcpy(_key, authenc_keys.enckey, DES3_EDE_KEY_SIZE);
         if (!((_key[0] ^ _key[2]) | (_key[1] ^ _key[3])) ||
             !((_key[2] ^ _key[4]) | (_key[3] ^ _key[5])) ||
             !((_key[0] ^ _key[4]) | (_key[1] ^ _key[5])))
-            return -EINVAL;
+            ctx->need_fallback = true;
     } else if (IS_AES(flags)) {
         /* No random key sizes */
         if (authenc_keys.enckeylen != AES_KEYSIZE_128 &&
+            authenc_keys.enckeylen != AES_KEYSIZE_192 &&
             authenc_keys.enckeylen != AES_KEYSIZE_256)
             return -EINVAL;
+        if (authenc_keys.enckeylen == AES_KEYSIZE_192)
+            ctx->need_fallback = true;
     }
 
     ctx->enc_keylen = authenc_keys.enckeylen;
@@ -597,7 +616,7 @@ static int qce_aead_setkey(struct crypto_aead *tfm, const u8 *key, unsigned int
     memset(ctx->auth_key, 0, sizeof(ctx->auth_key));
     memcpy(ctx->auth_key, authenc_keys.authkey, authenc_keys.authkeylen);
 
-    return 0;
+    return crypto_aead_setkey(ctx->fallback, key, keylen);
 }
 
 static int qce_aead_setauthsize(struct crypto_aead *tfm, unsigned int authsize)
@@ -612,15 +631,32 @@ static int qce_aead_setauthsize(struct crypto_aead *tfm, unsigned int authsize)
             return -EINVAL;
     }
     ctx->authsize = authsize;
-    return 0;
+
+    return crypto_aead_setauthsize(ctx->fallback, authsize);
 }
 
 static int qce_aead_init(struct crypto_aead *tfm)
 {
+    struct qce_aead_ctx *ctx = crypto_aead_ctx(tfm);
+
+    ctx->need_fallback = false;
+    ctx->fallback = crypto_alloc_aead(crypto_tfm_alg_name(&tfm->base),
+                                      0, CRYPTO_ALG_NEED_FALLBACK);
+
+    if (IS_ERR(ctx->fallback))
+        return PTR_ERR(ctx->fallback);
+
     crypto_aead_set_reqsize(tfm, sizeof(struct qce_aead_reqctx));
     return 0;
 }
 
+static void qce_aead_exit(struct crypto_aead *tfm)
+{
+    struct qce_aead_ctx *ctx = crypto_aead_ctx(tfm);
+
+    crypto_free_aead(ctx->fallback);
+}
+
 struct qce_aead_def {
     unsigned long flags;
     const char *name;
@@ -718,11 +754,13 @@ static int qce_aead_register_one(const struct qce_aead_def *def, struct qce_devi
     alg->encrypt = qce_aead_encrypt;
     alg->decrypt = qce_aead_decrypt;
     alg->init = qce_aead_init;
+    alg->exit = qce_aead_exit;
 
     alg->base.cra_priority = 300;
     alg->base.cra_flags = CRYPTO_ALG_ASYNC |
                           CRYPTO_ALG_ALLOCATES_MEMORY |
-                          CRYPTO_ALG_KERN_DRIVER_ONLY;
+                          CRYPTO_ALG_KERN_DRIVER_ONLY |
+                          CRYPTO_ALG_NEED_FALLBACK;
     alg->base.cra_ctxsize = sizeof(struct qce_aead_ctx);
     alg->base.cra_alignmask = 0;
     alg->base.cra_module = THIS_MODULE;
diff --git a/drivers/crypto/qce/aead.h b/drivers/crypto/qce/aead.h
index 3d1f2039930b..efb8477cc088 100644
--- a/drivers/crypto/qce/aead.h
+++ b/drivers/crypto/qce/aead.h
@@ -19,6 +19,8 @@ struct qce_aead_ctx {
     unsigned int enc_keylen;
     unsigned int auth_keylen;
     unsigned int authsize;
+    bool need_fallback;
+    struct crypto_aead *fallback;
 };
 
 struct qce_aead_reqctx {
@@ -39,6 +41,7 @@ struct qce_aead_reqctx {
     u8 ccm_nonce[QCE_MAX_NONCE];
     u8 ccmresult_buf[QCE_BAM_BURST_SIZE];
     u8 ccm_rfc4309_iv[QCE_MAX_IV_SIZE];
+    struct aead_request fallback_req;
 };
 
 static inline struct qce_alg_template *to_aead_tmpl(struct crypto_aead *tfm)