From patchwork Sun Jan 5 19:34:09 2025
X-Patchwork-Id: 855320
From: Eric Biggers <ebiggers@kernel.org>
To: linux-crypto@vger.kernel.org
Subject: [PATCH v3 1/8] crypto: skcipher - document skcipher_walk_done() and rename some vars
Date: Sun, 5 Jan 2025 11:34:09 -0800
Message-ID: <20250105193416.36537-2-ebiggers@kernel.org>
In-Reply-To: <20250105193416.36537-1-ebiggers@kernel.org>
References: <20250105193416.36537-1-ebiggers@kernel.org>

skcipher_walk_done() has an unusual calling convention, and some of its
local variables have unclear names.  Document it and rename variables to
make it a bit clearer what is going on.  No change in behavior.
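[Editor's note: as context for the calling convention documented below, a
minimal sketch of how a typical cipher implementation drives the walk and
passes skcipher_walk_done() the number of bytes it did *not* process.
This sketch is not part of the patch; encrypt_blocks(), ctx, and bsize
are illustrative placeholders:]

        struct skcipher_walk walk;
        unsigned int nbytes;
        int err;

        err = skcipher_walk_virt(&walk, req, false);
        while ((nbytes = walk.nbytes) != 0) {
                /* Process only whole blocks, except in the final step. */
                if (nbytes < walk.total)
                        nbytes = round_down(nbytes, bsize);
                encrypt_blocks(ctx, walk.dst.virt.addr, walk.src.virt.addr,
                               nbytes);
                /* Report the bytes left unprocessed in this step. */
                err = skcipher_walk_done(&walk, walk.nbytes - nbytes);
        }
        return err;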
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
---
 crypto/skcipher.c                  | 50 ++++++++++++++++++++----------
 include/crypto/internal/skcipher.h |  2 +-
 2 files changed, 35 insertions(+), 17 deletions(-)

diff --git a/crypto/skcipher.c b/crypto/skcipher.c
index d5fe0eca3826..8749c44f98a2 100644
--- a/crypto/skcipher.c
+++ b/crypto/skcipher.c
@@ -87,21 +87,39 @@ static int skcipher_done_slow(struct skcipher_walk *walk, unsigned int bsize)
 	addr = skcipher_get_spot(addr, bsize);
 	scatterwalk_copychunks(addr, &walk->out, bsize, 1);
 	return 0;
 }
 
-int skcipher_walk_done(struct skcipher_walk *walk, int err)
+/**
+ * skcipher_walk_done() - finish one step of a skcipher_walk
+ * @walk: the skcipher_walk
+ * @res: number of bytes *not* processed (>= 0) from walk->nbytes,
+ *	 or a -errno value to terminate the walk due to an error
+ *
+ * This function cleans up after one step of walking through the source and
+ * destination scatterlists, and advances to the next step if applicable.
+ * walk->nbytes is set to the number of bytes available in the next step,
+ * walk->total is set to the new total number of bytes remaining, and
+ * walk->{src,dst}.virt.addr is set to the next pair of data pointers.  If
+ * there is no more data, or if an error occurred (i.e. -errno return), then
+ * walk->nbytes and walk->total are set to 0 and all resources owned by the
+ * skcipher_walk are freed.
+ *
+ * Return: 0 or a -errno value.  If @res was a -errno value then it will be
+ *	   returned, but other errors may occur too.
+ */
+int skcipher_walk_done(struct skcipher_walk *walk, int res)
 {
-	unsigned int n = walk->nbytes;
-	unsigned int nbytes = 0;
+	unsigned int n = walk->nbytes; /* num bytes processed this step */
+	unsigned int total = 0; /* new total remaining */
 
 	if (!n)
 		goto finish;
 
-	if (likely(err >= 0)) {
-		n -= err;
-		nbytes = walk->total - n;
+	if (likely(res >= 0)) {
+		n -= res; /* subtract num bytes *not* processed */
+		total = walk->total - n;
 	}
 
 	if (likely(!(walk->flags & (SKCIPHER_WALK_SLOW |
 				    SKCIPHER_WALK_COPY |
 				    SKCIPHER_WALK_DIFF)))) {
@@ -113,35 +131,35 @@ int skcipher_walk_done(struct skcipher_walk *walk, int err)
 	} else if (walk->flags & SKCIPHER_WALK_COPY) {
 		skcipher_map_dst(walk);
 		memcpy(walk->dst.virt.addr, walk->page, n);
 		skcipher_unmap_dst(walk);
 	} else if (unlikely(walk->flags & SKCIPHER_WALK_SLOW)) {
-		if (err > 0) {
+		if (res > 0) {
 			/*
 			 * Didn't process all bytes.  Either the algorithm is
 			 * broken, or this was the last step and it turned out
 			 * the message wasn't evenly divisible into blocks but
 			 * the algorithm requires it.
 			 */
-			err = -EINVAL;
-			nbytes = 0;
+			res = -EINVAL;
+			total = 0;
 		} else
 			n = skcipher_done_slow(walk, n);
 	}
 
-	if (err > 0)
-		err = 0;
+	if (res > 0)
+		res = 0;
 
-	walk->total = nbytes;
+	walk->total = total;
 	walk->nbytes = 0;
 
 	scatterwalk_advance(&walk->in, n);
 	scatterwalk_advance(&walk->out, n);
-	scatterwalk_done(&walk->in, 0, nbytes);
-	scatterwalk_done(&walk->out, 1, nbytes);
+	scatterwalk_done(&walk->in, 0, total);
+	scatterwalk_done(&walk->out, 1, total);
 
-	if (nbytes) {
+	if (total) {
 		crypto_yield(walk->flags & SKCIPHER_WALK_SLEEP ?
 			     CRYPTO_TFM_REQ_MAY_SLEEP : 0);
 		return skcipher_walk_next(walk);
 	}
 
@@ -156,11 +174,11 @@ int skcipher_walk_done(struct skcipher_walk *walk, int err)
 	kfree(walk->buffer);
 	if (walk->page)
 		free_page((unsigned long)walk->page);
 
 out:
-	return err;
+	return res;
 }
 EXPORT_SYMBOL_GPL(skcipher_walk_done);
 
 static int skcipher_next_slow(struct skcipher_walk *walk, unsigned int bsize)
 {

diff --git a/include/crypto/internal/skcipher.h b/include/crypto/internal/skcipher.h
index 08d1e8c63afc..4f49621d3eb6 100644
--- a/include/crypto/internal/skcipher.h
+++ b/include/crypto/internal/skcipher.h
@@ -194,11 +194,11 @@ void crypto_unregister_lskcipher(struct lskcipher_alg *alg);
 int crypto_register_lskciphers(struct lskcipher_alg *algs, int count);
 void crypto_unregister_lskciphers(struct lskcipher_alg *algs, int count);
 int lskcipher_register_instance(struct crypto_template *tmpl,
 				struct lskcipher_instance *inst);
 
-int skcipher_walk_done(struct skcipher_walk *walk, int err);
+int skcipher_walk_done(struct skcipher_walk *walk, int res);
 int skcipher_walk_virt(struct skcipher_walk *walk,
 		       struct skcipher_request *req, bool atomic);
 int skcipher_walk_aead_encrypt(struct skcipher_walk *walk,
 			       struct aead_request *req, bool atomic);

From patchwork Sun Jan 5 19:34:10 2025
X-Patchwork-Id: 855185
From: Eric Biggers <ebiggers@kernel.org>
To: linux-crypto@vger.kernel.org
Subject: [PATCH v3 2/8] crypto: skcipher - remove unnecessary page alignment of bounce buffer
Date: Sun, 5 Jan 2025 11:34:10 -0800
Message-ID: <20250105193416.36537-3-ebiggers@kernel.org>
In-Reply-To: <20250105193416.36537-1-ebiggers@kernel.org>
References: <20250105193416.36537-1-ebiggers@kernel.org>

In the slow path of skcipher_walk where it uses a slab bounce buffer for
the data and/or IV, do not bother to avoid crossing a page boundary in
the part(s) of this buffer that are used, and do not bother to allocate
extra space in the buffer for that purpose.  The buffer is accessed only
by virtual address, so pages are irrelevant for it.

This logic may have been present due to the physical address support in
skcipher_walk, but that has now been removed.  Or it may have been
present to be consistent with the fast path that currently does not hand
back addresses that span pages, but that behavior is a side effect of
the pages being "mapped" one by one and is not actually a requirement.
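[Editor's note: a small worked example of the buffer sizing used after
this change, under the assumption that kzalloc() returns memory aligned
to at least crypto_tfm_ctx_alignment(); the concrete numbers are
illustrative only:]

        unsigned int bsize = 16, alignmask = 15;        /* example values */
        unsigned int n;
        u8 *buffer, *aligned;

        /*
         * Assume crypto_tfm_ctx_alignment() is 8.  Worst case, kzalloc()
         * returns a pointer that is 8 bytes past a 16-byte boundary, so
         * PTR_ALIGN() may skip forward by up to alignmask & ~(8 - 1) = 8
         * bytes.  Allocating bsize + 8 bytes therefore always leaves at
         * least bsize usable bytes at the aligned address.
         */
        n = bsize + (alignmask & ~(crypto_tfm_ctx_alignment() - 1));
        buffer = kzalloc(n, GFP_KERNEL);
        aligned = PTR_ALIGN(buffer, alignmask + 1);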
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
---
 crypto/skcipher.c | 62 ++++++++++++-----------------------------------
 1 file changed, 15 insertions(+), 47 deletions(-)

diff --git a/crypto/skcipher.c b/crypto/skcipher.c
index 8749c44f98a2..887cbce8f78d 100644
--- a/crypto/skcipher.c
+++ b/crypto/skcipher.c
@@ -61,32 +61,20 @@ static inline void skcipher_unmap_dst(struct skcipher_walk *walk)
 
 static inline gfp_t skcipher_walk_gfp(struct skcipher_walk *walk)
 {
 	return walk->flags & SKCIPHER_WALK_SLEEP ? GFP_KERNEL : GFP_ATOMIC;
 }
 
-/* Get a spot of the specified length that does not straddle a page.
- * The caller needs to ensure that there is enough space for this operation.
- */
-static inline u8 *skcipher_get_spot(u8 *start, unsigned int len)
-{
-	u8 *end_page = (u8 *)(((unsigned long)(start + len - 1)) & PAGE_MASK);
-
-	return max(start, end_page);
-}
-
 static inline struct skcipher_alg *__crypto_skcipher_alg(
 	struct crypto_alg *alg)
 {
 	return container_of(alg, struct skcipher_alg, base);
 }
 
 static int skcipher_done_slow(struct skcipher_walk *walk, unsigned int bsize)
 {
-	u8 *addr;
+	u8 *addr = PTR_ALIGN(walk->buffer, walk->alignmask + 1);
 
-	addr = (u8 *)ALIGN((unsigned long)walk->buffer, walk->alignmask + 1);
-	addr = skcipher_get_spot(addr, bsize);
 	scatterwalk_copychunks(addr, &walk->out, bsize, 1);
 	return 0;
 }
 
 /**
@@ -181,37 +169,26 @@ int skcipher_walk_done(struct skcipher_walk *walk, int res)
 EXPORT_SYMBOL_GPL(skcipher_walk_done);
 
 static int skcipher_next_slow(struct skcipher_walk *walk, unsigned int bsize)
 {
 	unsigned alignmask = walk->alignmask;
-	unsigned a;
 	unsigned n;
 	u8 *buffer;
 
 	if (!walk->buffer)
 		walk->buffer = walk->page;
 	buffer = walk->buffer;
-	if (buffer)
-		goto ok;
-
-	/* Start with the minimum alignment of kmalloc. */
-	a = crypto_tfm_ctx_alignment() - 1;
-	n = bsize;
-
-	/* Minimum size to align buffer by alignmask. */
-	n += alignmask & ~a;
-
-	/* Minimum size to ensure buffer does not straddle a page. */
-	n += (bsize - 1) & ~(alignmask | a);
-
-	buffer = kzalloc(n, skcipher_walk_gfp(walk));
-	if (!buffer)
-		return skcipher_walk_done(walk, -ENOMEM);
-	walk->buffer = buffer;
-ok:
+	if (!buffer) {
+		/* Min size for a buffer of bsize bytes aligned to alignmask */
+		n = bsize + (alignmask & ~(crypto_tfm_ctx_alignment() - 1));
+
+		buffer = kzalloc(n, skcipher_walk_gfp(walk));
+		if (!buffer)
+			return skcipher_walk_done(walk, -ENOMEM);
+		walk->buffer = buffer;
+	}
 	walk->dst.virt.addr = PTR_ALIGN(buffer, alignmask + 1);
-	walk->dst.virt.addr = skcipher_get_spot(walk->dst.virt.addr, bsize);
 	walk->src.virt.addr = walk->dst.virt.addr;
 
 	scatterwalk_copychunks(walk->src.virt.addr, &walk->in, bsize, 0);
 
 	walk->nbytes = bsize;
@@ -294,34 +271,25 @@ static int skcipher_walk_next(struct skcipher_walk *walk)
 	return skcipher_next_fast(walk);
 }
 
 static int skcipher_copy_iv(struct skcipher_walk *walk)
 {
-	unsigned a = crypto_tfm_ctx_alignment() - 1;
 	unsigned alignmask = walk->alignmask;
 	unsigned ivsize = walk->ivsize;
-	unsigned bs = walk->stride;
-	unsigned aligned_bs;
+	unsigned aligned_stride = ALIGN(walk->stride, alignmask + 1);
 	unsigned size;
 	u8 *iv;
 
-	aligned_bs = ALIGN(bs, alignmask + 1);
-
-	/* Minimum size to align buffer by alignmask. */
-	size = alignmask & ~a;
-
-	size += aligned_bs + ivsize;
-
-	/* Minimum size to ensure buffer does not straddle a page. */
-	size += (bs - 1) & ~(alignmask | a);
+	/* Min size for a buffer of stride + ivsize, aligned to alignmask */
+	size = aligned_stride + ivsize +
+	       (alignmask & ~(crypto_tfm_ctx_alignment() - 1));
 
 	walk->buffer = kmalloc(size, skcipher_walk_gfp(walk));
 	if (!walk->buffer)
 		return -ENOMEM;
 
-	iv = PTR_ALIGN(walk->buffer, alignmask + 1);
-	iv = skcipher_get_spot(iv, bs) + aligned_bs;
+	iv = PTR_ALIGN(walk->buffer, alignmask + 1) + aligned_stride;
 
 	walk->iv = memcpy(iv, walk->iv, walk->ivsize);
 	return 0;
 }

From patchwork Sun Jan 5 19:34:11 2025
X-Patchwork-Id: 855184
From: Eric Biggers <ebiggers@kernel.org>
To: linux-crypto@vger.kernel.org
Subject: [PATCH v3 3/8] crypto: skcipher - remove redundant clamping to page size
Date: Sun, 5 Jan 2025 11:34:11 -0800
Message-ID: <20250105193416.36537-4-ebiggers@kernel.org>
In-Reply-To: <20250105193416.36537-1-ebiggers@kernel.org>
References: <20250105193416.36537-1-ebiggers@kernel.org>

In the case where skcipher_walk_next() allocates a bounce page, that
page by definition has size PAGE_SIZE.  The number of bytes to copy 'n'
is guaranteed to fit in it, since earlier in the function it was clamped
to be at most a page.  Therefore remove the unnecessary logic that tried
to clamp 'n' again to fit in the bounce page.
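[Editor's note: the earlier clamping referred to above happens via
scatterwalk_clamp() in skcipher_walk_next().  As a sketch from memory of
include/crypto/scatterwalk.h (recalled, not quoted from this series), it
limits the byte count to what remains in the current page, which is why
'n' can never exceed one page by the time the bounce page is used:]

        /* Limit nbytes to what remains in the walk's current page. */
        static inline unsigned int scatterwalk_clamp(struct scatter_walk *walk,
                                                     unsigned int nbytes)
        {
                unsigned int len_this_page = scatterwalk_pagelen(walk);

                return min(nbytes, len_this_page);
        }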
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
---
 crypto/skcipher.c | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/crypto/skcipher.c b/crypto/skcipher.c
index 887cbce8f78d..c627e267b125 100644
--- a/crypto/skcipher.c
+++ b/crypto/skcipher.c
@@ -248,28 +248,24 @@ static int skcipher_walk_next(struct skcipher_walk *walk)
 			return skcipher_walk_done(walk, -EINVAL);
 
 slow_path:
 		return skcipher_next_slow(walk, bsize);
 	}
+	walk->nbytes = n;
 
 	if (unlikely((walk->in.offset | walk->out.offset) & walk->alignmask)) {
 		if (!walk->page) {
 			gfp_t gfp = skcipher_walk_gfp(walk);
 
 			walk->page = (void *)__get_free_page(gfp);
 			if (!walk->page)
 				goto slow_path;
 		}
-
-		walk->nbytes = min_t(unsigned, n,
-				     PAGE_SIZE - offset_in_page(walk->page));
 		walk->flags |= SKCIPHER_WALK_COPY;
 		return skcipher_next_copy(walk);
 	}
 
-	walk->nbytes = n;
-
 	return skcipher_next_fast(walk);
 }
 
 static int skcipher_copy_iv(struct skcipher_walk *walk)
 {

From patchwork Sun Jan 5 19:34:12 2025
X-Patchwork-Id: 855319
From: Eric Biggers <ebiggers@kernel.org>
To: linux-crypto@vger.kernel.org
Subject: [PATCH v3 4/8] crypto: skcipher - remove redundant check for SKCIPHER_WALK_SLOW
Date: Sun, 5 Jan 2025 11:34:12 -0800
Message-ID: <20250105193416.36537-5-ebiggers@kernel.org>
In-Reply-To: <20250105193416.36537-1-ebiggers@kernel.org>
References: <20250105193416.36537-1-ebiggers@kernel.org>

In skcipher_walk_done(), remove the check for SKCIPHER_WALK_SLOW because
it is always true.  All other flags (and lack thereof) were checked
earlier in the function, leaving SKCIPHER_WALK_SLOW as the only
remaining possibility.

Signed-off-by: Eric Biggers <ebiggers@kernel.org>
---
 crypto/skcipher.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/crypto/skcipher.c b/crypto/skcipher.c
index c627e267b125..98606def1bf9 100644
--- a/crypto/skcipher.c
+++ b/crypto/skcipher.c
@@ -118,11 +118,11 @@ int skcipher_walk_done(struct skcipher_walk *walk, int res)
 		goto unmap_src;
 	} else if (walk->flags & SKCIPHER_WALK_COPY) {
 		skcipher_map_dst(walk);
 		memcpy(walk->dst.virt.addr, walk->page, n);
 		skcipher_unmap_dst(walk);
-	} else if (unlikely(walk->flags & SKCIPHER_WALK_SLOW)) {
+	} else { /* SKCIPHER_WALK_SLOW */
 		if (res > 0) {
 			/*
 			 * Didn't process all bytes.  Either the algorithm is
 			 * broken, or this was the last step and it turned out
 			 * the message wasn't evenly divisible into blocks but

From patchwork Sun Jan 5 19:34:13 2025
X-Patchwork-Id: 855318
From: Eric Biggers <ebiggers@kernel.org>
To: linux-crypto@vger.kernel.org
Subject: [PATCH v3 5/8] crypto: skcipher - fold skcipher_walk_skcipher() into skcipher_walk_virt()
Date: Sun, 5 Jan 2025 11:34:13 -0800
Message-ID: <20250105193416.36537-6-ebiggers@kernel.org>
In-Reply-To: <20250105193416.36537-1-ebiggers@kernel.org>
References: <20250105193416.36537-1-ebiggers@kernel.org>

Fold skcipher_walk_skcipher() into skcipher_walk_virt(), which is its
only remaining caller.  No change in behavior.
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
---
 crypto/skcipher.c | 23 ++++++++---------------
 1 file changed, 8 insertions(+), 15 deletions(-)

diff --git a/crypto/skcipher.c b/crypto/skcipher.c
index 98606def1bf9..17f4bc79ca8b 100644
--- a/crypto/skcipher.c
+++ b/crypto/skcipher.c
@@ -304,23 +304,26 @@ static int skcipher_walk_first(struct skcipher_walk *walk)
 
 	walk->page = NULL;
 	return skcipher_walk_next(walk);
 }
 
-static int skcipher_walk_skcipher(struct skcipher_walk *walk,
-				  struct skcipher_request *req)
+int skcipher_walk_virt(struct skcipher_walk *walk,
+		       struct skcipher_request *req, bool atomic)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
 	struct skcipher_alg *alg = crypto_skcipher_alg(tfm);
+	int err = 0;
+
+	might_sleep_if(req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP);
 
 	walk->total = req->cryptlen;
 	walk->nbytes = 0;
 	walk->iv = req->iv;
 	walk->oiv = req->iv;
 
 	if (unlikely(!walk->total))
-		return 0;
+		goto out;
 
 	scatterwalk_start(&walk->in, req->src);
 	scatterwalk_start(&walk->out, req->dst);
 
 	walk->flags &= ~SKCIPHER_WALK_SLEEP;
@@ -334,22 +337,12 @@ static int skcipher_walk_skcipher(struct skcipher_walk *walk,
 	if (alg->co.base.cra_type != &crypto_skcipher_type)
 		walk->stride = alg->co.chunksize;
 	else
 		walk->stride = alg->walksize;
 
-	return skcipher_walk_first(walk);
-}
-
-int skcipher_walk_virt(struct skcipher_walk *walk,
-		       struct skcipher_request *req, bool atomic)
-{
-	int err;
-
-	might_sleep_if(req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP);
-
-	err = skcipher_walk_skcipher(walk, req);
-
+	err = skcipher_walk_first(walk);
+out:
 	walk->flags &= atomic ? ~SKCIPHER_WALK_SLEEP : ~0;
 
 	return err;
 }
 EXPORT_SYMBOL_GPL(skcipher_walk_virt);

From patchwork Sun Jan 5 19:34:14 2025
X-Patchwork-Id: 855183
From: Eric Biggers <ebiggers@kernel.org>
To: linux-crypto@vger.kernel.org
Subject: [PATCH v3 6/8] crypto: skcipher - clean up initialization of skcipher_walk::flags
Date: Sun, 5 Jan 2025 11:34:14 -0800
Message-ID: <20250105193416.36537-7-ebiggers@kernel.org>
In-Reply-To: <20250105193416.36537-1-ebiggers@kernel.org>
References: <20250105193416.36537-1-ebiggers@kernel.org>

- Initialize SKCIPHER_WALK_SLEEP in a consistent way, and check for
  atomic=true at the same time as CRYPTO_TFM_REQ_MAY_SLEEP.  Technically,
  atomic=true only needs to apply after the first step, but it is very
  rarely used, so optimize for the common case and check 'atomic'
  alongside CRYPTO_TFM_REQ_MAY_SLEEP, which is more efficient.

- Initialize flags other than SKCIPHER_WALK_SLEEP to 0 rather than
  preserving them.  No caller actually initializes the flags, which makes
  it impossible to use their original values for anything.  Indeed, that
  does not happen and all meaningful flags get overridden anyway.  It may
  have been thought that just clearing one flag would be faster than
  clearing all flags, but that's not the case, as the former is a
  read-modify-write operation whereas the latter is just a write.

- Move the explicit clearing of SKCIPHER_WALK_SLOW, SKCIPHER_WALK_COPY,
  and SKCIPHER_WALK_DIFF into skcipher_walk_done(), since it is now only
  needed on non-first steps.
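[Editor's note: a minimal sketch of the second point above, not part of
the patch.  Clearing one flag must preserve the others, so it compiles to
a load, a mask, and a store, whereas assigning the whole field is a
single store:]

        walk->flags &= ~SKCIPHER_WALK_SLEEP; /* read flags, clear bit, write back */
        walk->flags = SKCIPHER_WALK_SLEEP;   /* single write, no prior read */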
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
---
 crypto/skcipher.c | 39 +++++++++++++--------------------------
 1 file changed, 13 insertions(+), 26 deletions(-)

diff --git a/crypto/skcipher.c b/crypto/skcipher.c
index 17f4bc79ca8b..e54d1ad46566 100644
--- a/crypto/skcipher.c
+++ b/crypto/skcipher.c
@@ -146,10 +146,12 @@ int skcipher_walk_done(struct skcipher_walk *walk, int res)
 	scatterwalk_done(&walk->out, 1, total);
 
 	if (total) {
 		crypto_yield(walk->flags & SKCIPHER_WALK_SLEEP ?
 			     CRYPTO_TFM_REQ_MAY_SLEEP : 0);
+		walk->flags &= ~(SKCIPHER_WALK_SLOW | SKCIPHER_WALK_COPY |
+				 SKCIPHER_WALK_DIFF);
 		return skcipher_walk_next(walk);
 	}
 
 finish:
 	/* Short-circuit for the common/fast path. */
@@ -233,13 +235,10 @@ static int skcipher_next_fast(struct skcipher_walk *walk)
 
 static int skcipher_walk_next(struct skcipher_walk *walk)
 {
 	unsigned int bsize;
 	unsigned int n;
 
-	walk->flags &= ~(SKCIPHER_WALK_SLOW | SKCIPHER_WALK_COPY |
-			 SKCIPHER_WALK_DIFF);
-
 	n = walk->total;
 	bsize = min(walk->stride, max(n, walk->blocksize));
 	n = scatterwalk_clamp(&walk->in, n);
 	n = scatterwalk_clamp(&walk->out, n);
 
@@ -309,55 +308,53 @@ static int skcipher_walk_first(struct skcipher_walk *walk)
 int skcipher_walk_virt(struct skcipher_walk *walk,
 		       struct skcipher_request *req, bool atomic)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
 	struct skcipher_alg *alg = crypto_skcipher_alg(tfm);
-	int err = 0;
 
 	might_sleep_if(req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP);
 
 	walk->total = req->cryptlen;
 	walk->nbytes = 0;
 	walk->iv = req->iv;
 	walk->oiv = req->iv;
+	if ((req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) && !atomic)
+		walk->flags = SKCIPHER_WALK_SLEEP;
+	else
+		walk->flags = 0;
 
 	if (unlikely(!walk->total))
-		goto out;
+		return 0;
 
 	scatterwalk_start(&walk->in, req->src);
 	scatterwalk_start(&walk->out, req->dst);
 
-	walk->flags &= ~SKCIPHER_WALK_SLEEP;
-	walk->flags |= req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP ?
-		       SKCIPHER_WALK_SLEEP : 0;
-
 	walk->blocksize = crypto_skcipher_blocksize(tfm);
 	walk->ivsize = crypto_skcipher_ivsize(tfm);
 	walk->alignmask = crypto_skcipher_alignmask(tfm);
 
 	if (alg->co.base.cra_type != &crypto_skcipher_type)
 		walk->stride = alg->co.chunksize;
 	else
 		walk->stride = alg->walksize;
 
-	err = skcipher_walk_first(walk);
-out:
-	walk->flags &= atomic ? ~SKCIPHER_WALK_SLEEP : ~0;
-
-	return err;
+	return skcipher_walk_first(walk);
 }
 EXPORT_SYMBOL_GPL(skcipher_walk_virt);
 
 static int skcipher_walk_aead_common(struct skcipher_walk *walk,
 				     struct aead_request *req, bool atomic)
 {
 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
-	int err;
 
 	walk->nbytes = 0;
 	walk->iv = req->iv;
 	walk->oiv = req->iv;
+	if ((req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) && !atomic)
+		walk->flags = SKCIPHER_WALK_SLEEP;
+	else
+		walk->flags = 0;
 
 	if (unlikely(!walk->total))
 		return 0;
 
 	scatterwalk_start(&walk->in, req->src);
@@ -367,26 +364,16 @@ static int skcipher_walk_aead_common(struct skcipher_walk *walk,
 	scatterwalk_copychunks(NULL, &walk->out, req->assoclen, 2);
 
 	scatterwalk_done(&walk->in, 0, walk->total);
 	scatterwalk_done(&walk->out, 0, walk->total);
 
-	if (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP)
-		walk->flags |= SKCIPHER_WALK_SLEEP;
-	else
-		walk->flags &= ~SKCIPHER_WALK_SLEEP;
-
 	walk->blocksize = crypto_aead_blocksize(tfm);
 	walk->stride = crypto_aead_chunksize(tfm);
 	walk->ivsize = crypto_aead_ivsize(tfm);
 	walk->alignmask = crypto_aead_alignmask(tfm);
 
-	err = skcipher_walk_first(walk);
-
-	if (atomic)
-		walk->flags &= ~SKCIPHER_WALK_SLEEP;
-
-	return err;
+	return skcipher_walk_first(walk);
 }
 
 int skcipher_walk_aead_encrypt(struct skcipher_walk *walk,
 			       struct aead_request *req, bool atomic)
 {

From patchwork Sun Jan 5 19:34:15 2025
X-Patchwork-Id: 855317
From: Eric Biggers <ebiggers@kernel.org>
To: linux-crypto@vger.kernel.org
Subject: [PATCH v3 7/8] crypto: skcipher - optimize initializing skcipher_walk fields
Date: Sun, 5 Jan 2025 11:34:15 -0800
Message-ID: <20250105193416.36537-8-ebiggers@kernel.org>
In-Reply-To: <20250105193416.36537-1-ebiggers@kernel.org>
References: <20250105193416.36537-1-ebiggers@kernel.org>

The helper functions like crypto_skcipher_blocksize() take in a pointer
to a tfm object, but they actually return properties of the algorithm.
As the Linux kernel is compiled with -fno-strict-aliasing, the compiler
has to assume that the writes to struct skcipher_walk could clobber the
tfm's pointer to its algorithm.  Thus it gets repeatedly reloaded in the
generated code.  Therefore, replace the use of these helper functions
with straightforward accesses to the struct fields.

Note that while *users* of the skcipher and aead APIs are supposed to
use the helper functions, this particular code is part of the API
*implementation* in crypto/skcipher.c, which already accesses the
algorithm struct directly in many cases.  So there is no reason to
prefer the helper functions here.
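[Editor's note: a minimal self-contained sketch of the codegen issue
described above; the struct names are invented for illustration and are
not the real crypto API types.  Because the kernel builds with
-fno-strict-aliasing, the stores through 'walk' may alias 'tfm->alg', so
the compiler must reload tfm->alg after each store; caching it in a
local lets it stay in a register:]

        struct alg { unsigned int blocksize, ivsize; };
        struct tfm { const struct alg *alg; };
        struct walk { unsigned int blocksize, ivsize; };

        void init_reloads(struct walk *walk, struct tfm *tfm)
        {
                /* tfm->alg must be reloaded after the first store. */
                walk->blocksize = tfm->alg->blocksize;
                walk->ivsize = tfm->alg->ivsize;
        }

        void init_cached(struct walk *walk, struct tfm *tfm)
        {
                const struct alg *alg = tfm->alg;       /* loaded once */

                walk->blocksize = alg->blocksize;
                walk->ivsize = alg->ivsize;
        }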
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
---
 crypto/skcipher.c | 30 ++++++++++++++++++++----------
 1 file changed, 20 insertions(+), 10 deletions(-)

diff --git a/crypto/skcipher.c b/crypto/skcipher.c
index e54d1ad46566..6b62d816f08d 100644
--- a/crypto/skcipher.c
+++ b/crypto/skcipher.c
@@ -306,12 +306,12 @@ static int skcipher_walk_first(struct skcipher_walk *walk)
 }
 
 int skcipher_walk_virt(struct skcipher_walk *walk,
 		       struct skcipher_request *req, bool atomic)
 {
-	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
-	struct skcipher_alg *alg = crypto_skcipher_alg(tfm);
+	const struct skcipher_alg *alg =
+		crypto_skcipher_alg(crypto_skcipher_reqtfm(req));
 
 	might_sleep_if(req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP);
 
 	walk->total = req->cryptlen;
 	walk->nbytes = 0;
@@ -326,13 +326,18 @@ int skcipher_walk_virt(struct skcipher_walk *walk,
 		return 0;
 
 	scatterwalk_start(&walk->in, req->src);
 	scatterwalk_start(&walk->out, req->dst);
 
-	walk->blocksize = crypto_skcipher_blocksize(tfm);
-	walk->ivsize = crypto_skcipher_ivsize(tfm);
-	walk->alignmask = crypto_skcipher_alignmask(tfm);
+	/*
+	 * Accessing 'alg' directly generates better code than using the
+	 * crypto_skcipher_blocksize() and similar helper functions here, as it
+	 * prevents the algorithm pointer from being repeatedly reloaded.
+	 */
+	walk->blocksize = alg->base.cra_blocksize;
+	walk->ivsize = alg->co.ivsize;
+	walk->alignmask = alg->base.cra_alignmask;
 
 	if (alg->co.base.cra_type != &crypto_skcipher_type)
 		walk->stride = alg->co.chunksize;
 	else
 		walk->stride = alg->walksize;
 
@@ -342,11 +347,11 @@ int skcipher_walk_virt(struct skcipher_walk *walk,
 EXPORT_SYMBOL_GPL(skcipher_walk_virt);
 
 static int skcipher_walk_aead_common(struct skcipher_walk *walk,
 				     struct aead_request *req, bool atomic)
 {
-	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+	const struct aead_alg *alg = crypto_aead_alg(crypto_aead_reqtfm(req));
 
 	walk->nbytes = 0;
 	walk->iv = req->iv;
 	walk->oiv = req->iv;
 	if ((req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) && !atomic)
@@ -364,14 +369,19 @@ static int skcipher_walk_aead_common(struct skcipher_walk *walk,
 	scatterwalk_copychunks(NULL, &walk->out, req->assoclen, 2);
 
 	scatterwalk_done(&walk->in, 0, walk->total);
 	scatterwalk_done(&walk->out, 0, walk->total);
 
-	walk->blocksize = crypto_aead_blocksize(tfm);
-	walk->stride = crypto_aead_chunksize(tfm);
-	walk->ivsize = crypto_aead_ivsize(tfm);
-	walk->alignmask = crypto_aead_alignmask(tfm);
+	/*
+	 * Accessing 'alg' directly generates better code than using the
+	 * crypto_aead_blocksize() and similar helper functions here, as it
+	 * prevents the algorithm pointer from being repeatedly reloaded.
+	 */
+	walk->blocksize = alg->base.cra_blocksize;
+	walk->stride = alg->chunksize;
+	walk->ivsize = alg->ivsize;
+	walk->alignmask = alg->base.cra_alignmask;
 
 	return skcipher_walk_first(walk);
 }
 
 int skcipher_walk_aead_encrypt(struct skcipher_walk *walk,

From patchwork Sun Jan 5 19:34:16 2025
X-Patchwork-Id: 855182
From: Eric Biggers <ebiggers@kernel.org>
To: linux-crypto@vger.kernel.org
Subject: [PATCH v3 8/8] crypto: skcipher - call cond_resched() directly
Date: Sun, 5 Jan 2025 11:34:16 -0800
Message-ID: <20250105193416.36537-9-ebiggers@kernel.org>
In-Reply-To: <20250105193416.36537-1-ebiggers@kernel.org>
References: <20250105193416.36537-1-ebiggers@kernel.org>

In skcipher_walk_done(), instead of calling crypto_yield() which
requires a translation between flags, just call cond_resched() directly.
This has the same effect.
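[Editor's note: for reference, the flag translation being removed.
crypto_yield() is roughly the following inline helper; this is a sketch
based on include/crypto/algapi.h as recalled at the time of this series,
not quoted from the patch:]

        static inline void crypto_yield(u32 flags)
        {
                if (flags & CRYPTO_TFM_REQ_MAY_SLEEP)
                        cond_resched();
        }

so passing CRYPTO_TFM_REQ_MAY_SLEEP based on SKCIPHER_WALK_SLEEP merely
round-trips one flag into another before reaching cond_resched().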
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
---
 crypto/skcipher.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/crypto/skcipher.c b/crypto/skcipher.c
index 6b62d816f08d..a9eb2dcf2898 100644
--- a/crypto/skcipher.c
+++ b/crypto/skcipher.c
@@ -144,12 +144,12 @@ int skcipher_walk_done(struct skcipher_walk *walk, int res)
 	scatterwalk_advance(&walk->out, n);
 	scatterwalk_done(&walk->in, 0, total);
 	scatterwalk_done(&walk->out, 1, total);
 
 	if (total) {
-		crypto_yield(walk->flags & SKCIPHER_WALK_SLEEP ?
-			     CRYPTO_TFM_REQ_MAY_SLEEP : 0);
+		if (walk->flags & SKCIPHER_WALK_SLEEP)
+			cond_resched();
 		walk->flags &= ~(SKCIPHER_WALK_SLOW | SKCIPHER_WALK_COPY |
 				 SKCIPHER_WALK_DIFF);
 		return skcipher_walk_next(walk);
 	}