From patchwork Tue May 10 11:07:17 2022
X-Patchwork-Id: 572227
From: "Herbert Xu"
Date: Tue, 10 May 2022 19:07:17 +0800
Subject: [RFC PATCH 5/7] crypto: skcipher - Add ctx helpers with DMA alignment
To: Catalin Marinas, Ard Biesheuvel, Will Deacon, Marc Zyngier,
    Arnd Bergmann, Greg Kroah-Hartman, Andrew Morton, Linus Torvalds,
    Linux Memory Management List, Linux ARM, Linux Kernel Mailing List,
    "David S. Miller", Linux Crypto Mailing List

This patch adds helpers for accessing the skcipher context structure
and the request context structure with extra alignment for DMA access.

Signed-off-by: Herbert Xu
---
 include/crypto/internal/skcipher.h | 28 ++++++++++++++++++++++++++++
 1 file changed, 28 insertions(+)

diff --git a/include/crypto/internal/skcipher.h b/include/crypto/internal/skcipher.h
index a2339f80a6159..719822e42dc46 100644
--- a/include/crypto/internal/skcipher.h
+++ b/include/crypto/internal/skcipher.h
@@ -122,6 +122,15 @@ static inline void crypto_skcipher_set_reqsize(
 	skcipher->reqsize = reqsize;
 }
 
+static inline void crypto_skcipher_set_reqsize_dma(
+	struct crypto_skcipher *skcipher, unsigned int reqsize)
+{
+#ifdef ARCH_DMA_MINALIGN
+	reqsize += ARCH_DMA_MINALIGN & ~(crypto_tfm_ctx_alignment() - 1);
+#endif
+	skcipher->reqsize = reqsize;
+}
+
 int crypto_register_skcipher(struct skcipher_alg *alg);
 void crypto_unregister_skcipher(struct skcipher_alg *alg);
 int crypto_register_skciphers(struct skcipher_alg *algs, int count);
@@ -151,11 +160,30 @@ static inline void *crypto_skcipher_ctx(struct crypto_skcipher *tfm)
 	return crypto_tfm_ctx(&tfm->base);
 }
 
+static inline void *crypto_skcipher_ctx_dma(struct crypto_skcipher *tfm)
+{
+	return crypto_tfm_ctx_dma(&tfm->base);
+}
+
 static inline void *skcipher_request_ctx(struct skcipher_request *req)
 {
 	return req->__ctx;
 }
 
+static inline void *skcipher_request_ctx_dma(struct skcipher_request *req)
+{
+	unsigned int align = 1;
+
+#ifdef ARCH_DMA_MINALIGN
+	align = ARCH_DMA_MINALIGN;
+#endif
+
+	if (align <= crypto_tfm_ctx_alignment())
+		align = 1;
+
+	return PTR_ALIGN(skcipher_request_ctx(req), align);
+}
+
 static inline u32 skcipher_request_flags(struct skcipher_request *req)
 {
 	return req->base.flags;
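
A note on the arithmetic in crypto_skcipher_set_reqsize_dma(): since
ARCH_DMA_MINALIGN and crypto_tfm_ctx_alignment() are both powers of two,
ARCH_DMA_MINALIGN & ~(crypto_tfm_ctx_alignment() - 1) evaluates to
ARCH_DMA_MINALIGN when the DMA alignment is at least the base context
alignment, and to zero otherwise. That extra space covers the worst-case
shift PTR_ALIGN() in skcipher_request_ctx_dma() can apply when it bumps
the context pointer up to an ARCH_DMA_MINALIGN boundary.

The sketch below shows how a driver might consume these helpers. It is
illustrative only and not part of the patch: all mydrv_* identifiers are
hypothetical, crypto_tfm_ctx_dma() is assumed to be provided by an
earlier patch in this series, and the tfm context is assumed to be sized
with matching DMA padding by that same mechanism.

/* Illustrative driver sketch, not part of this patch.  All mydrv_*
 * names are hypothetical; crypto_tfm_ctx_dma() is assumed to come
 * from earlier in this series. */
#include <linux/errno.h>
#include <linux/string.h>
#include <linux/types.h>
#include <crypto/internal/skcipher.h>

struct mydrv_ctx {
	u8 key[32];		/* device may read this via DMA */
};

struct mydrv_req_ctx {
	u8 iv[16];		/* device may read/write this via DMA */
};

/* Stub standing in for a real hardware submission path. */
static int mydrv_submit_to_hw(struct mydrv_ctx *ctx,
			      struct mydrv_req_ctx *rctx)
{
	return -EINPROGRESS;	/* completion would come from the IRQ handler */
}

static int mydrv_init(struct crypto_skcipher *tfm)
{
	/* Reserve enough extra request space for the DMA alignment bump. */
	crypto_skcipher_set_reqsize_dma(tfm, sizeof(struct mydrv_req_ctx));
	return 0;
}

static int mydrv_encrypt(struct skcipher_request *req)
{
	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
	struct mydrv_ctx *ctx = crypto_skcipher_ctx_dma(tfm);
	struct mydrv_req_ctx *rctx = skcipher_request_ctx_dma(req);

	/* Where ARCH_DMA_MINALIGN is defined, ctx and rctx now start
	 * on an ARCH_DMA_MINALIGN boundary, so the device can DMA
	 * into them without touching neighbouring cachelines. */
	memcpy(rctx->iv, req->iv, sizeof(rctx->iv));

	return mydrv_submit_to_hw(ctx, rctx);
}

The point of the sketch is that the sizing call and the accessors have
to be switched together: pairing crypto_skcipher_set_reqsize_dma() with
the plain skcipher_request_ctx() would waste the padding, while the
reverse would hand the device an insufficiently aligned buffer.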