crypto/arm64: aes-ce-gcm - add missing kernel_neon_begin/end pair

Message ID 20180727125915.20326-1-ard.biesheuvel@linaro.org
State Accepted
Commit c7513c2a2714204d3588ecaa170ae628fd0d217e
Series crypto/arm64: aes-ce-gcm - add missing kernel_neon_begin/end pair

Commit Message

Ard Biesheuvel July 27, 2018, 12:59 p.m. UTC
Calling pmull_gcm_encrypt_block() requires kernel_neon_begin() and
kernel_neon_end() to be used since the routine touches the NEON
register file. Add the missing calls.

Also, since NEON register contents are not preserved outside of
a kernel mode NEON region, pass the key schedule array again.

Fixes: 7c50136a8aba ("crypto: arm64/aes-ghash - yield NEON after every ...")
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>

---
 arch/arm64/crypto/ghash-ce-glue.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

-- 
2.18.0
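
For context, the rule being enforced here: on arm64 the kernel does not save
the user FPSIMD/NEON register file on kernel entry, so any kernel code that
touches NEON registers must do so inside a kernel_neon_begin()/
kernel_neon_end() pair, and must not expect register contents to survive from
one such region to the next. A minimal sketch of the pattern, with a
hypothetical do_neon_work() standing in for an asm routine such as
pmull_gcm_encrypt_block():

    #include <linux/types.h>
    #include <linux/linkage.h>
    #include <asm/neon.h>	/* kernel_neon_begin()/kernel_neon_end() */

    /* Hypothetical asm routine that uses NEON registers. Everything it
     * needs (e.g. the expanded AES key schedule) must be passed in
     * explicitly, since NEON state loaded during an earlier begin/end
     * region is stale by the time this region starts. */
    asmlinkage void do_neon_work(u8 dst[], u8 const src[],
				 u32 const rk[], int rounds);

    static void neon_work_one_block(u8 *buf, u32 const *key_enc, int rounds)
    {
	    kernel_neon_begin();	/* saves the task's FPSIMD state */
	    do_neon_work(buf, buf, key_enc, rounds);
	    kernel_neon_end();	/* NEON registers are dead from here on */
    }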

Comments

Ard Biesheuvel July 31, 2018, 7:22 a.m. UTC | #1
(+ Catalin, Will)

On 27 July 2018 at 14:59, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
> Calling pmull_gcm_encrypt_block() requires kernel_neon_begin() and
> kernel_neon_end() to be used since the routine touches the NEON
> register file. Add the missing calls.
>
> Also, since NEON register contents are not preserved outside of
> a kernel mode NEON region, pass the key schedule array again.
>
> Fixes: 7c50136a8aba ("crypto: arm64/aes-ghash - yield NEON after every ...")
> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> ---
>  arch/arm64/crypto/ghash-ce-glue.c | 8 ++++++--
>  1 file changed, 6 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/crypto/ghash-ce-glue.c b/arch/arm64/crypto/ghash-ce-glue.c
> index 7cf0b1aa6ea8..8a10f1d7199a 100644
> --- a/arch/arm64/crypto/ghash-ce-glue.c
> +++ b/arch/arm64/crypto/ghash-ce-glue.c
> @@ -488,9 +488,13 @@ static int gcm_decrypt(struct aead_request *req)
>                         err = skcipher_walk_done(&walk,
>                                                  walk.nbytes % AES_BLOCK_SIZE);
>                 }
> -               if (walk.nbytes)
> -                       pmull_gcm_encrypt_block(iv, iv, NULL,
> +               if (walk.nbytes) {
> +                       kernel_neon_begin();
> +                       pmull_gcm_encrypt_block(iv, iv, ctx->aes_key.key_enc,
>                                                 num_rounds(&ctx->aes_key));
> +                       kernel_neon_end();
> +               }
> +
>         } else {
>                 __aes_arm64_encrypt(ctx->aes_key.key_enc, tag, iv,
>                                     num_rounds(&ctx->aes_key));
> --
> 2.18.0
>


This fixes a rather nasty bug in the AES-GCM code: failing to call
kernel_neon_begin()/_end() may clobber the NEON register state of
unrelated userland processes.

Could we please get this queued before v4.18 is released? Thanks.
Will Deacon July 31, 2018, 8:47 a.m. UTC | #2
On Tue, Jul 31, 2018 at 09:22:52AM +0200, Ard Biesheuvel wrote:
> (+ Catalin, Will)
>
> On 27 July 2018 at 14:59, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
> > Calling pmull_gcm_encrypt_block() requires kernel_neon_begin() and
> > kernel_neon_end() to be used since the routine touches the NEON
> > register file. Add the missing calls.
> >
> > Also, since NEON register contents are not preserved outside of
> > a kernel mode NEON region, pass the key schedule array again.
> >
> > [...]
>
> This fixes a rather nasty bug in the AES-GCM code: failing to call
> kernel_neon_begin()/_end() may clobber the NEON register state of
> unrelated userland processes.
>
> Could we please get this queued before v4.18 is released? Thanks.


I can take this via the arm64 tree if Herbert is ok with that.

Herbert?

Will
Herbert Xu July 31, 2018, 11:01 a.m. UTC | #3
On Tue, Jul 31, 2018 at 09:47:28AM +0100, Will Deacon wrote:
> On Tue, Jul 31, 2018 at 09:22:52AM +0200, Ard Biesheuvel wrote:
> > [...]
> >
> > This fixes a rather nasty bug in the AES-GCM code: failing to call
> > kernel_neon_begin()/_end() may clobber the NEON register state of
> > unrelated userland processes.
> >
> > Could we please get this queued before v4.18 is released? Thanks.
>
> I can take this via the arm64 tree if Herbert is ok with that.


Sure:

Acked-by: Herbert Xu <herbert@gondor.apana.org.au>


Thanks,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

Patch

diff --git a/arch/arm64/crypto/ghash-ce-glue.c b/arch/arm64/crypto/ghash-ce-glue.c
index 7cf0b1aa6ea8..8a10f1d7199a 100644
--- a/arch/arm64/crypto/ghash-ce-glue.c
+++ b/arch/arm64/crypto/ghash-ce-glue.c
@@ -488,9 +488,13 @@ static int gcm_decrypt(struct aead_request *req)
 			err = skcipher_walk_done(&walk,
 						 walk.nbytes % AES_BLOCK_SIZE);
 		}
-		if (walk.nbytes)
-			pmull_gcm_encrypt_block(iv, iv, NULL,
+		if (walk.nbytes) {
+			kernel_neon_begin();
+			pmull_gcm_encrypt_block(iv, iv, ctx->aes_key.key_enc,
 						num_rounds(&ctx->aes_key));
+			kernel_neon_end();
+		}
+
 	} else {
 		__aes_arm64_encrypt(ctx->aes_key.key_enc, tag, iv,
 				    num_rounds(&ctx->aes_key));
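
A note on why passing NULL used to be acceptable here: the rk argument of
pmull_gcm_encrypt_block() follows a convention where NULL means "the round
keys are already loaded in NEON registers", which is only safe while
execution stays within the same kernel_neon_begin()/kernel_neon_end() region
that loaded them. Commit 7c50136a8aba moved this tail encryption outside that
region, so the fix both brackets the call and passes the key schedule
explicitly. A simplified C model of the convention follows; the real routine
is assembly in arch/arm64/crypto/ghash-ce-core.S, and the helpers below are
hypothetical:

    #include <linux/types.h>

    /* Hypothetical helpers standing in for pieces of the real asm. */
    void load_round_keys_into_neon(u32 const rk[], int rounds);
    void encrypt_one_block_with_loaded_keys(u8 dst[], u8 const src[],
					    int rounds);

    /* Illustrative model only -- not the actual implementation. */
    void pmull_gcm_encrypt_block_model(u8 dst[], u8 const src[],
				       u32 const rk[], int rounds)
    {
	    if (rk)
		    load_round_keys_into_neon(rk, rounds);
	    /* rk == NULL: assume the keys are still live in NEON
	     * registers, which is valid only inside the same
	     * kernel_neon_begin()/end() region that loaded them. */
	    encrypt_one_block_with_loaded_keys(dst, src, rounds);
    }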