[v7,00/12] x86: Support Key Locker

Message ID 20230524165717.14062-1-chang.seok.bae@intel.com

Message

Chang S. Bae May 24, 2023, 4:57 p.m. UTC
Hi all,

The previous posting [v6] focused on delivering the hardware
performance update, primarily by revamping the old series. It drew
feedback from multiple reviewers, and this revision incorporates
changes to address that feedback. The primary goal here is to meet
those reviewers' expectations first.

The series consists of two parts -- the x86 core and its crypto
library. In this revision, far more changes were made to the latter.
The changelogs were also revised throughout to be more reviewable.

Here is an overview of the feedback and the resulting changes (trivial ones omitted):

Part 1. Crypto Library (PATCH10-12):

  PATCH12: AES-KL driver
  - Merge all the AES-KL patches. (Eric Biggers)
  - Make the driver for the 64-bit mode only. (Eric Biggers)
  - Rework the key-size check code. (Eric Biggers)
  - Adjust the Kconfig change. (Robert Elliott)
  - Update the changelog. (Eric Biggers)
  - Adjust the ASM code to return a proper error code. (Eric Biggers)

  PATCH11: AES-NI rework
  - Inline the helper code to avoid the indirect call. (Eric Biggers)
  - Rename the filename: aes-intel* -> aes-helper*. (Eric Biggers)
  - Don't export symbols here yet; instead, do it later when needed.
  - Improve the coding style. (Eric Biggers)

  PATCH10: the AESNI-XTS field type fix
  - Add as a new patch. (Eric Biggers)

Part 2. The X86 Core (PATCH1-9):

  PATCH09: a chicken bit and a Kconfig option
  - Rebase on the upstream: commit a894a8a56b57 ("Documentation:
    kernel-parameters: sort all "no..." parameters")

  PATCH08: the wrapping key recovery
  - Export the symbol only when it is needed.

  PATCH07: the wrapping key load at boot-time
  - Use memzero_explicit() instead of memset(). (Robert Elliott)
  - Improve the function prototype. (Eric Biggers and Dave Hansen)

  PATCH01: documentation
  - Rebase on the upstream -- commit ff61f0791ce9 ("docs: move x86
    documentation into Documentation/arch/"). (Nathan Huckleberry)

This version can also be found here:
    git://github.com/intel-staging/keylocker.git kl-v7

[v6] -- the last posting:
    https://lore.kernel.org/lkml/20230410225936.8940-1-chang.seok.bae@intel.com/

Thanks,
Chang

Chang S. Bae (12):
  Documentation/x86: Document Key Locker
  x86/cpufeature: Enumerate Key Locker feature
  x86/insn: Add Key Locker instructions to the opcode map
  x86/asm: Add a wrapper function for the LOADIWKEY instruction
  x86/msr-index: Add MSRs for Key Locker wrapping key
  x86/keylocker: Define Key Locker CPUID leaf
  x86/cpu/keylocker: Load a wrapping key at boot-time
  x86/PM/keylocker: Restore the wrapping key on the resume from ACPI
    S3/4
  x86/cpu: Add a configuration and command line option for Key Locker
  crypto: x86/aesni - Use the proper data type in struct aesni_xts_ctx
  crypto: x86/aes - Prepare for a new AES implementation
  crypto: x86/aes-kl - Implement the AES-XTS algorithm

 .../admin-guide/kernel-parameters.txt         |   2 +
 Documentation/arch/x86/index.rst              |   1 +
 Documentation/arch/x86/keylocker.rst          |  97 +++
 arch/x86/Kconfig                              |   3 +
 arch/x86/crypto/Kconfig                       |  22 +
 arch/x86/crypto/Makefile                      |   3 +
 arch/x86/crypto/aes-helper_asm.S              |  22 +
 arch/x86/crypto/aes-helper_glue.h             | 161 +++++
 arch/x86/crypto/aeskl-intel_asm.S             | 580 ++++++++++++++++++
 arch/x86/crypto/aeskl-intel_glue.c            | 216 +++++++
 arch/x86/crypto/aesni-intel_asm.S             |  55 +-
 arch/x86/crypto/aesni-intel_glue.c            | 242 +++-----
 arch/x86/crypto/aesni-intel_glue.h            |  17 +
 arch/x86/include/asm/cpufeatures.h            |   1 +
 arch/x86/include/asm/disabled-features.h      |   8 +-
 arch/x86/include/asm/keylocker.h              |  45 ++
 arch/x86/include/asm/msr-index.h              |   6 +
 arch/x86/include/asm/special_insns.h          |  32 +
 arch/x86/include/uapi/asm/processor-flags.h   |   2 +
 arch/x86/kernel/Makefile                      |   1 +
 arch/x86/kernel/cpu/common.c                  |  21 +-
 arch/x86/kernel/cpu/cpuid-deps.c              |   1 +
 arch/x86/kernel/keylocker.c                   | 212 +++++++
 arch/x86/kernel/smpboot.c                     |   2 +
 arch/x86/lib/x86-opcode-map.txt               |  11 +-
 arch/x86/power/cpu.c                          |   2 +
 tools/arch/x86/lib/x86-opcode-map.txt         |  11 +-
 27 files changed, 1573 insertions(+), 203 deletions(-)
 create mode 100644 Documentation/arch/x86/keylocker.rst
 create mode 100644 arch/x86/crypto/aes-helper_asm.S
 create mode 100644 arch/x86/crypto/aes-helper_glue.h
 create mode 100644 arch/x86/crypto/aeskl-intel_asm.S
 create mode 100644 arch/x86/crypto/aeskl-intel_glue.c
 create mode 100644 arch/x86/crypto/aesni-intel_glue.h
 create mode 100644 arch/x86/include/asm/keylocker.h
 create mode 100644 arch/x86/kernel/keylocker.c


base-commit: 3a128547bd4425cdef27c606efc88e1eb03a2dba

Comments

Borislav Petkov June 3, 2023, 4:37 p.m. UTC | #1
On Sat, Jun 03, 2023 at 08:22:24AM -0700, Chang S. Bae wrote:
> +static __init int x86_nokeylocker_setup(char *arg)
> +{
> +	/* Expect an exact match without trailing characters. */
> +	if (strlen(arg))
> +		return 0;
> +
> +	if (!cpu_feature_enabled(X86_FEATURE_KEYLOCKER))
> +		return 1;
> +
> +	setup_clear_cpu_cap(X86_FEATURE_KEYLOCKER);
> +	pr_info("x86/keylocker: Disabled by kernel command line.\n");
> +	return 1;
> +}
> +__setup("nokeylocker", x86_nokeylocker_setup);

Can we stop adding those just to remove them at some point later but
simply do:

clearcpuid=keylocker

?
Eric Biggers June 4, 2023, 3:34 p.m. UTC | #2
On Sat, Jun 03, 2023 at 08:22:25AM -0700, Chang S. Bae wrote:
> Every field in struct aesni_xts_ctx is a pointer to a byte array.

A byte array.  Not a pointer to a byte array.

> Then, the field can be redefined as that struct type instead of the obscure
> pointer.

There's no pointer.
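
For context, the fields in question are embedded byte arrays sized to hold
a struct crypto_aes_ctx, and the patch simply gives them their real type.
Roughly (a sketch -- the array sizing and alignment attribute are assumed
here; only the field names come from the xts_aesni_setkey() hunk below):

	/*
	 * Before: each field is a byte array that actually holds a
	 * struct crypto_aes_ctx (sizing/attribute assumed, not shown here).
	 */
	struct aesni_xts_ctx {
		u8 raw_tweak_ctx[sizeof(struct crypto_aes_ctx)] AESNI_ALIGN_ATTR;
		u8 raw_crypt_ctx[sizeof(struct crypto_aes_ctx)] AESNI_ALIGN_ATTR;
	};

	/* After: the fields carry their real type directly. */
	struct aesni_xts_ctx {
		struct crypto_aes_ctx tweak_ctx AESNI_ALIGN_ATTR;
		struct crypto_aes_ctx crypt_ctx AESNI_ALIGN_ATTR;
	};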

>  static inline struct
>  aesni_rfc4106_gcm_ctx *aesni_rfc4106_gcm_ctx_get(struct crypto_aead *tfm)
>  {
> -	unsigned long align = AESNI_ALIGN;
> -
> -	if (align <= crypto_tfm_ctx_alignment())
> -		align = 1;
> -	return PTR_ALIGN(crypto_aead_ctx(tfm), align);
> +	return (struct aesni_rfc4106_gcm_ctx *)aes_align_addr(crypto_aead_ctx(tfm));
>  }

Explicit casts from 'void *' are unnecessary.
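
That is, since crypto_aead_ctx() already returns a void pointer, the
getter can simply be (assuming aes_align_addr() returns void *, as its
use here suggests):

	static inline struct
	aesni_rfc4106_gcm_ctx *aesni_rfc4106_gcm_ctx_get(struct crypto_aead *tfm)
	{
		return aes_align_addr(crypto_aead_ctx(tfm));
	}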

>  static int xts_aesni_setkey(struct crypto_skcipher *tfm, const u8 *key,
>  			    unsigned int keylen)
>  {
> -	struct aesni_xts_ctx *ctx = crypto_skcipher_ctx(tfm);
> +	struct aesni_xts_ctx *ctx = aes_xts_ctx(tfm);
>  	int err;
>  
>  	err = xts_verify_key(tfm, key, keylen);
> @@ -893,20 +892,20 @@ static int xts_aesni_setkey(struct crypto_skcipher *tfm, const u8 *key,
>  	keylen /= 2;
>  
>  	/* first half of xts-key is for crypt */
> -	err = aes_set_key_common(crypto_skcipher_tfm(tfm), ctx->raw_crypt_ctx,
> +	err = aes_set_key_common(crypto_skcipher_tfm(tfm), &ctx->crypt_ctx,
>  				 key, keylen);
>  	if (err)
>  		return err;
>  
>  	/* second half of xts-key is for tweak */
> -	return aes_set_key_common(crypto_skcipher_tfm(tfm), ctx->raw_tweak_ctx,
> +	return aes_set_key_common(crypto_skcipher_tfm(tfm), &ctx->tweak_ctx,
>  				  key + keylen, keylen);
>  }

To re-iterate what I said on v6, the runtime alignment to a 16-byte boundary
should happen when translating the raw crypto_skcipher_ctx() into the pointer to
the aes_xts_ctx.  It should not happen when accessing each individual field in
the aes_xts_ctx.

Yet, this code is still doing runtime alignment when accessing each individual
field, as the second argument to aes_set_key_common() is 'void *raw_ctx' which
aes_set_key_common() runtime-aligns to crypto_aes_ctx.

We should keep everything consistent, which means making aes_set_key_common()
take a pointer to crypto_aes_ctx and not do the runtime alignment.

- Eric
Chang S. Bae June 4, 2023, 10:02 p.m. UTC | #3
On 6/4/2023 8:34 AM, Eric Biggers wrote:
> 
> To re-iterate what I said on v6, the runtime alignment to a 16-byte boundary
> should happen when translating the raw crypto_skcipher_ctx() into the pointer to
> the aes_xts_ctx.  It should not happen when accessing each individual field in
> the aes_xts_ctx.
> 
> Yet, this code is still doing runtime alignment when accessing each individual
> field, as the second argument to aes_set_key_common() is 'void *raw_ctx' which
> aes_set_key_common() runtime-aligns to crypto_aes_ctx.
> 
> We should keep everything consistent, which means making aes_set_key_common()
> take a pointer to crypto_aes_ctx and not do the runtime alignment.

Let me clarify the problem this patch tries to solve here. The
current struct aesni_xts_ctx is ugly. So, the main story is to fix it
before reusing the code for AES-KL.

Then, the rework part may be applicable for code reusability. That
seems okay to do here.

Fixing the runtime alignment entirely would touch code other than
AES-XTS. Yes, that's the ideal cleanup for consistency. But it seems
less relevant to this series. I'd be happy to follow up on that
improvement, though.

Thanks,
Chang
Chang S. Bae June 4, 2023, 10:13 p.m. UTC | #4
On 6/3/2023 9:37 AM, Borislav Petkov wrote:
> On Sat, Jun 03, 2023 at 08:22:24AM -0700, Chang S. Bae wrote:
>> +static __init int x86_nokeylocker_setup(char *arg)
>> +{
>> +	/* Expect an exact match without trailing characters. */
>> +	if (strlen(arg))
>> +		return 0;
>> +
>> +	if (!cpu_feature_enabled(X86_FEATURE_KEYLOCKER))
>> +		return 1;
>> +
>> +	setup_clear_cpu_cap(X86_FEATURE_KEYLOCKER);
>> +	pr_info("x86/keylocker: Disabled by kernel command line.\n");
>> +	return 1;
>> +}
>> +__setup("nokeylocker", x86_nokeylocker_setup);
> 
> Can we stop adding those just to remove them at some point later but
> simply do:
> 
> clearcpuid=keylocker
> 
> ?

Oh, I was not sure about this policy. Thanks -- now I'm confident
about removing this.

Chang
Eric Biggers June 5, 2023, 2:46 a.m. UTC | #5
On Sun, Jun 04, 2023 at 03:02:32PM -0700, Chang S. Bae wrote:
> On 6/4/2023 8:34 AM, Eric Biggers wrote:
> > 
> > To re-iterate what I said on v6, the runtime alignment to a 16-byte boundary
> > should happen when translating the raw crypto_skcipher_ctx() into the pointer to
> > the aes_xts_ctx.  It should not happen when accessing each individual field in
> > the aes_xts_ctx.
> > 
> > Yet, this code is still doing runtime alignment when accessing each individual
> > field, as the second argument to aes_set_key_common() is 'void *raw_ctx' which
> > aes_set_key_common() runtime-aligns to crypto_aes_ctx.
> > 
> > We should keep everything consistent, which means making aes_set_key_common()
> > take a pointer to crypto_aes_ctx and not do the runtime alignment.
> 
> Let me clarify the problem this patch tries to solve here. The
> current struct aesni_xts_ctx is ugly. So, the main story is to fix it
> before reusing the code for AES-KL.
> 
> Then, the rework part may be applicable for code reusability. That
> seems okay to do here.
> 
> Fixing the runtime alignment entirely would touch code other than
> AES-XTS. Yes, that's the ideal cleanup for consistency. But it seems
> less relevant to this series. I'd be happy to follow up on that
> improvement, though.

IMO the issue is that your patch makes the code (including the XTS code)
inconsistent because it makes it use a mix of both approaches: it aligns each
field individually, *and* it aligns the ctx up-front.  I was hoping to switch
fully from the former approach to the latter approach, instead of switching from
the former approach to a mix of the two approaches as you are proposing.

The following on top of this patch is what I am asking for.  I think it would be
appropriate to fold into this patch.

diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
index 589648142c173..ad1ae7a88b59d 100644
--- a/arch/x86/crypto/aesni-intel_glue.c
+++ b/arch/x86/crypto/aesni-intel_glue.c
@@ -228,10 +228,10 @@ static inline struct aesni_xts_ctx *aes_xts_ctx(struct crypto_skcipher *tfm)
 	return (struct aesni_xts_ctx *)aes_align_addr(crypto_skcipher_ctx(tfm));
 }
 
-static int aes_set_key_common(struct crypto_tfm *tfm, void *raw_ctx,
+static int aes_set_key_common(struct crypto_tfm *tfm,
+			      struct crypto_aes_ctx *ctx,
 			      const u8 *in_key, unsigned int key_len)
 {
-	struct crypto_aes_ctx *ctx = aes_ctx(raw_ctx);
 	int err;
 
 	if (key_len != AES_KEYSIZE_128 && key_len != AES_KEYSIZE_192 &&
@@ -252,7 +252,8 @@ static int aes_set_key_common(struct crypto_tfm *tfm, void *raw_ctx,
 static int aes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
 		       unsigned int key_len)
 {
-	return aes_set_key_common(tfm, crypto_tfm_ctx(tfm), in_key, key_len);
+	return aes_set_key_common(tfm, aes_ctx(crypto_tfm_ctx(tfm)),
+				  in_key, key_len);
 }
 
 static void aesni_encrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
@@ -285,7 +286,7 @@ static int aesni_skcipher_setkey(struct crypto_skcipher *tfm, const u8 *key,
 			         unsigned int len)
 {
 	return aes_set_key_common(crypto_skcipher_tfm(tfm),
-				  crypto_skcipher_ctx(tfm), key, len);
+				  aes_ctx(crypto_skcipher_ctx(tfm)), key, len);
 }
 
 static int ecb_encrypt(struct skcipher_request *req)
Chang S. Bae June 5, 2023, 4:41 a.m. UTC | #6
On 6/4/2023 7:46 PM, Eric Biggers wrote:
> 
> IMO the issue is that your patch makes the code (including the XTS code)
> inconsistent because it makes it use a mix of both approaches: it aligns each
> field individually, *and* it aligns the ctx up-front. 

But, the code did this already:

common_rfc4106_set_key()
-> aesni_rfc4106_gcm_ctx_get(aead) 	// align ->ctx
-> aes_set_key_common()
    -> aes_ctx()				// here, this aligns again

And generic_gcmaes_set_key() as well.
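
In other words, aes_ctx() itself boils down to another runtime
alignment of the pointer it is handed -- roughly (a sketch based on the
helpers in this series):

	static inline struct crypto_aes_ctx *aes_ctx(void *raw_ctx)
	{
		/* Aligns the raw ctx pointer again, like the getters above. */
		return (struct crypto_aes_ctx *)aes_align_addr(raw_ctx);
	}

So a ctx that was already aligned by its getter gets aligned a second
time inside aes_set_key_common().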

Hmm, maybe a separate patch can spell out this issue more clearly for 
the record, no?

Thanks,
Chang