
[07/29] crypto: skcipher - optimize initializing skcipher_walk fields

Message ID 20241221091056.282098-8-ebiggers@kernel.org
State Superseded
Series crypto: scatterlist handling improvements

Commit Message

Eric Biggers Dec. 21, 2024, 9:10 a.m. UTC
From: Eric Biggers <ebiggers@google.com>

The helper functions like crypto_skcipher_blocksize() take in a pointer
to a tfm object, but they actually return properties of the algorithm.
As the Linux kernel is compiled with -fno-strict-aliasing, the compiler
has to assume that the writes to struct skcipher_walk could clobber the
tfm's pointer to its algorithm.  Thus it gets repeatedly reloaded in the
generated code.  Therefore, replace the use of these helper functions
with straightforward accesses to the struct fields.
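
As an aside, the reload can be illustrated with a minimal, hypothetical sketch
(the struct names alg, tfm and walk below are stand-ins, not the real kernel
types): with -fno-strict-aliasing the compiler cannot rule out that a store
through walk modifies *tfm, so every helper-style read of tfm->alg that follows
such a store must reload the pointer, while a read through a local algorithm
pointer is done only once.

/*
 * Minimal illustration (hypothetical types, not the real kernel structs).
 * Built with -fno-strict-aliasing, the compiler must assume a store through
 * "walk" may modify "*tfm", so tfm->alg is reloaded before each later read
 * in init_via_helpers().  In init_direct() the algorithm pointer lives in a
 * local variable that no store can alias, so it is loaded a single time.
 */
struct alg  { unsigned int blocksize, ivsize, alignmask; };
struct tfm  { const struct alg *alg; };
struct walk { unsigned int blocksize, ivsize, alignmask; };

static void init_via_helpers(struct walk *walk, const struct tfm *tfm)
{
	walk->blocksize = tfm->alg->blocksize;	/* load tfm->alg */
	walk->ivsize    = tfm->alg->ivsize;	/* reload tfm->alg */
	walk->alignmask = tfm->alg->alignmask;	/* reload tfm->alg */
}

static void init_direct(struct walk *walk, const struct tfm *tfm)
{
	const struct alg *alg = tfm->alg;	/* single load */

	walk->blocksize = alg->blocksize;
	walk->ivsize    = alg->ivsize;
	walk->alignmask = alg->alignmask;
}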

Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 crypto/skcipher.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

Comments

Eric Biggers Dec. 29, 2024, 10:10 p.m. UTC | #1
On Sat, Dec 21, 2024 at 07:16:07PM +0800, Herbert Xu wrote:
> Eric Biggers <ebiggers@kernel.org> wrote:
> >
> > @@ -326,13 +326,13 @@ int skcipher_walk_virt(struct skcipher_walk *walk,
> >                return 0;
> > 
> >        scatterwalk_start(&walk->in, req->src);
> >        scatterwalk_start(&walk->out, req->dst);
> > 
> > -       walk->blocksize = crypto_skcipher_blocksize(tfm);
> > -       walk->ivsize = crypto_skcipher_ivsize(tfm);
> > -       walk->alignmask = crypto_skcipher_alignmask(tfm);
> > +       walk->blocksize = alg->base.cra_blocksize;
> > +       walk->ivsize = alg->co.ivsize;
> > +       walk->alignmask = alg->base.cra_alignmask;
> 
> Please do this instead:
> 
> 	unsigned bs, ivs, am;
> 
> 	bs = crypto_skcipher_blocksize(tfm);
> 	ivs = crypto_skcipher_ivsize(tfm);
> 	am = crypto_skcipher_alignmask(tfm);
> 	walk->blocksize = bs;
> 	walk->ivsize = ivs;
> 	walk->alignmask = am;
> 
> This generates the right thing for me with gcc12.
> 

This seems strictly worse than my version, so I don't plan to do this.  It's
more lines of code, and it causes an extra push and pop to be needed in
skcipher_walk_virt() to free up enough registers to hold all values at once.  It
may be intended that API users are supposed to use the helper functions instead
of accessing the algorithm struct directly, but this code is not a user; it's
part of the API implementation in crypto/skcipher.c.  There are already lots of
other direct accesses to the algorithm struct in the same file, and even another
in skcipher_walk_virt() already.  The helper functions are pointless in this
context and just cause problems like the one this patch is fixing.

- Eric
Herbert Xu Jan. 2, 2025, 1:03 a.m. UTC | #2
On Sun, Dec 29, 2024 at 02:10:11PM -0800, Eric Biggers wrote:
>
> This seems strictly worse than my version, so I don't plan to do this.  It's
> more lines of code, and it causes an extra push and pop to be needed in
> skcipher_walk_virt() to free up enough registers to hold all values at once.  It
> may be intended that API users are supposed to use the helper functions instead
> of accessing the algorithm struct directly, but this code is not a user; it's
> part of the API implementation in crypto/skcipher.c.  There are already lots of
> other direct accesses to the algorithm struct in the same file, and even another
> in skcipher_walk_virt() already.  The helper functions are pointless in this
> context and just cause problems like the one this patch is fixing.

OK, but please add a comment where the accesses occur so that
people won't try to change this back.

Thanks,
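
One possible shape for the requested comment (a sketch only; the wording
actually added in a later revision of the series may differ) would be to
annotate the direct field accesses in skcipher_walk_virt():

	/*
	 * Read the algorithm fields directly instead of going through the
	 * crypto_skcipher_*() helpers: the helpers re-dereference the tfm's
	 * algorithm pointer, which the compiler (building with
	 * -fno-strict-aliasing) would have to reload after each write to
	 * *walk.  See the discussion on this patch.
	 */
	walk->blocksize = alg->base.cra_blocksize;
	walk->ivsize = alg->co.ivsize;
	walk->alignmask = alg->base.cra_alignmask;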

Patch

diff --git a/crypto/skcipher.c b/crypto/skcipher.c
index e54d1ad46566..7ef2e4ddf07a 100644
--- a/crypto/skcipher.c
+++ b/crypto/skcipher.c
@@ -306,12 +306,12 @@  static int skcipher_walk_first(struct skcipher_walk *walk)
 }
 
 int skcipher_walk_virt(struct skcipher_walk *walk,
 		       struct skcipher_request *req, bool atomic)
 {
-	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
-	struct skcipher_alg *alg = crypto_skcipher_alg(tfm);
+	const struct skcipher_alg *alg =
+		crypto_skcipher_alg(crypto_skcipher_reqtfm(req));
 
 	might_sleep_if(req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP);
 
 	walk->total = req->cryptlen;
 	walk->nbytes = 0;
@@ -326,13 +326,13 @@  int skcipher_walk_virt(struct skcipher_walk *walk,
 		return 0;
 
 	scatterwalk_start(&walk->in, req->src);
 	scatterwalk_start(&walk->out, req->dst);
 
-	walk->blocksize = crypto_skcipher_blocksize(tfm);
-	walk->ivsize = crypto_skcipher_ivsize(tfm);
-	walk->alignmask = crypto_skcipher_alignmask(tfm);
+	walk->blocksize = alg->base.cra_blocksize;
+	walk->ivsize = alg->co.ivsize;
+	walk->alignmask = alg->base.cra_alignmask;
 
 	if (alg->co.base.cra_type != &crypto_skcipher_type)
 		walk->stride = alg->co.chunksize;
 	else
 		walk->stride = alg->walksize;
@@ -342,11 +342,11 @@  int skcipher_walk_virt(struct skcipher_walk *walk,
 EXPORT_SYMBOL_GPL(skcipher_walk_virt);
 
 static int skcipher_walk_aead_common(struct skcipher_walk *walk,
 				     struct aead_request *req, bool atomic)
 {
-	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+	const struct aead_alg *alg = crypto_aead_alg(crypto_aead_reqtfm(req));
 
 	walk->nbytes = 0;
 	walk->iv = req->iv;
 	walk->oiv = req->iv;
 	if ((req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) && !atomic)
@@ -364,14 +364,14 @@  static int skcipher_walk_aead_common(struct skcipher_walk *walk,
 	scatterwalk_copychunks(NULL, &walk->out, req->assoclen, 2);
 
 	scatterwalk_done(&walk->in, 0, walk->total);
 	scatterwalk_done(&walk->out, 0, walk->total);
 
-	walk->blocksize = crypto_aead_blocksize(tfm);
-	walk->stride = crypto_aead_chunksize(tfm);
-	walk->ivsize = crypto_aead_ivsize(tfm);
-	walk->alignmask = crypto_aead_alignmask(tfm);
+	walk->blocksize = alg->base.cra_blocksize;
+	walk->stride = alg->chunksize;
+	walk->ivsize = alg->ivsize;
+	walk->alignmask = alg->base.cra_alignmask;
 
 	return skcipher_walk_first(walk);
 }
 
 int skcipher_walk_aead_encrypt(struct skcipher_walk *walk,