
[v3,00/11] Encrypted Hibernation

Message ID 20220927164922.3383711-1-evgreen@chromium.org

Message

Evan Green Sept. 27, 2022, 4:49 p.m. UTC
We are exploring enabling hibernation in some new scenarios. However,
our security team has a few requirements, listed below:
1. The hibernate image must be encrypted with protection derived from
   both the platform (eg TPM) and user authentication data (eg
   password).
2. Hibernation must not be a vector by which a malicious userspace can
   escalate to the kernel.

Requirement #1 can be achieved solely with uswsusp; however, requirement
#2 necessitates mechanisms in the kernel to guarantee the integrity of
the hibernate image. The kernel needs a way to authenticate that it generated
the hibernate image being loaded, and that the image has not been tampered
with. Adding support for in-kernel AEAD encryption with a TPM-sealed key
allows us to achieve both requirements with a single computation pass.

Matthew Garrett published a series [1] that aligns closely with this
goal. His series utilized the fact that PCR23 is a resettable PCR that
can be blocked from access by usermode. The TPM can create a sealed key
tied to PCR23 in two ways. First, the TPM can attest to the value of
PCR23 when the key was created, which the kernel can use on resume to
verify that the kernel must have created the key (since it is the only
one capable of modifying PCR23). Second, it can create a policy that enforces
PCR23 be set to a specific value as a condition of unsealing the key,
preventing usermode from unsealing the key by talking directly to the
TPM.
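As a rough model of that gating (not the kernel code itself; the in-tree
version is quoted later in the thread), the filter idea can be sketched in
plain userspace C. The command-code constants follow the TPM2 spec, but
cmd_restricted() and its return convention are simplified stand-ins for
what the series actually adds:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Constants mirroring the TPM2 spec / the kernel's tpm.h. */
#define TPM_HEADER_SIZE     10          /* tag(2) + size(4) + cc(4) */
#define TPM2_CC_PCR_EXTEND  0x00000182
#define TPM2_CC_PCR_RESET   0x0000013d
#define RESTRICTED_PCR      23

static uint32_t get_be32(const uint8_t *p)
{
	return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
	       ((uint32_t)p[2] << 8) | (uint32_t)p[3];
}

/*
 * Simplified model of the filter: commands that extend or reset the
 * restricted PCR are rejected, everything else passes. Returns 0 if
 * allowed, -1 (standing in for -EPERM) if blocked.
 */
static int cmd_restricted(const uint8_t *buf, size_t size)
{
	uint32_t cc, handle;

	if (size < TPM_HEADER_SIZE + 4)
		return 0;	/* too short to name a PCR handle */
	cc = get_be32(buf + 6);	/* command code sits after tag + size */
	if (cc != TPM2_CC_PCR_EXTEND && cc != TPM2_CC_PCR_RESET)
		return 0;
	handle = get_be32(buf + TPM_HEADER_SIZE);
	return handle == RESTRICTED_PCR ? -1 : 0;
}
```

With usermode unable to extend or reset PCR 23, only the kernel can put
the PCR into the state the sealing policy demands.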

This series adopts that primitive as a foundation, tweaking and building
on it a bit. Where Matthew's series used the TPM-backed key to encrypt a
hash of the image, this series uses the key directly as a gcm(aes)
encryption key, which the kernel uses to encrypt and decrypt the
hibernate image in chunks of 16 pages. This provides both encryption and
integrity in a single pass, which turns out to be a noticeable performance
improvement over separate passes for encryption and hashing.
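The per-chunk tag overhead of this scheme is easy to quantify. Below is a
hypothetical helper (not from the series) computing it, assuming 4 KiB
pages and the 16-byte gcm(aes) tag; an 8 GiB image works out to 2 MiB of
tags, matching the figure quoted later in the thread:

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SIZE   4096ULL          /* assuming 4 KiB pages */
#define CHUNK_PAGES 16ULL            /* pages per AEAD invocation */
#define GCM_TAG_LEN 16ULL            /* gcm(aes) auth tag bytes */

/* Total authentication-tag overhead for an image of image_bytes. */
static uint64_t tag_overhead(uint64_t image_bytes)
{
	uint64_t chunk_bytes = CHUNK_PAGES * PAGE_SIZE;
	uint64_t chunks = (image_bytes + chunk_bytes - 1) / chunk_bytes;

	return chunks * GCM_TAG_LEN;
}
```

Larger chunks would shrink the overhead further, at the cost of a larger
staging buffer held during encryption.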

The series also introduces the concept of mixing user key material into
the encryption key. This allows usermode to introduce key material
based on unspecified external authentication data (in our case derived
from something like the user password or PIN), without requiring
usermode to do a separate encryption pass.
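The mixing step can be sketched as follows. Note this is a toy: the
series derives the effective key with SHA-256 (via CRYPTO_LIB_SHA256),
whereas the stand-in below uses a dependency-free FNV-1a mix purely to
illustrate that the effective key depends on both inputs; mix_keys() and
the mixing order are illustrative, not the series' actual derivation:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define KEY_SIZE 32	/* both the kernel key and the user key are 32 bytes */

/*
 * Toy stand-in for the real derivation (which hashes the kernel key
 * together with the user key material using SHA-256): absorb both byte
 * strings into a 64-bit FNV-1a state, then expand the state into an
 * output key. Changing either input changes the output.
 */
static void mix_keys(const uint8_t *kernel_key, const uint8_t *user_key,
		     uint8_t *out)
{
	uint64_t h = 0xcbf29ce484222325ULL;
	size_t i;

	for (i = 0; i < KEY_SIZE; i++)
		h = (h ^ kernel_key[i]) * 0x100000001b3ULL;
	for (i = 0; i < KEY_SIZE; i++)
		h = (h ^ user_key[i]) * 0x100000001b3ULL;
	for (i = 0; i < KEY_SIZE; i++) {
		h = (h ^ i) * 0x100000001b3ULL;
		out[i] = (uint8_t)(h >> 32);
	}
}

/* Constant-structure comparison helper for the example. */
static int keys_equal(const uint8_t *a, const uint8_t *b)
{
	size_t i;

	for (i = 0; i < KEY_SIZE; i++)
		if (a[i] != b[i])
			return 0;
	return 1;
}
```

The point of the construction is that neither the TPM-sealed kernel key
nor the user-derived material alone suffices to reconstruct the
encryption key.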

Matthew also documented issues his series had [2] related to generating
fake images by booting alternate kernels without the PCR23 limiting.
With access to PCR23 on the same machine, usermode can create fake
hibernate images that are indistinguishable to the new kernel from
genuine ones. His post outlines a solution that involves adding more
PCRs into the creation data and policy, with some gyrations to make this
work well on a standard PC.

Our approach would be similar: on our machines, PCR 0 indicates whether
the system is booted in secure/verified mode or developer mode. By
adding PCR 0 to the policy, we can reject hibernate images created in
developer mode while in verified mode (or vice versa).

Additionally, mixing in the user authentication data limits both
data exfiltration attacks (eg a stolen laptop) and forged hibernation
image attacks to attackers that already know the authentication data (eg
user's password). This, combined with our relatively sealed userspace
(dm-verity on the rootfs) and some judicious clearing of the hibernate
image (such as across an OS update), further reduces the risk of an online
attack. The remaining attack space of a forgery from someone with
physical access to the device and knowledge of the authentication data
is out of scope for us, given that flipping to developer mode or
reflashing RO firmware trivially achieves the same thing.

A couple of patches still need to be written on top of this series. The
generalized functionality to OR in additional PCRs via Kconfig (like PCR
0 or 5) still needs to be added. We'll also need a patch that disallows
unencrypted forms of resume from hibernation, to fully close the door
to malicious userspace. However, I wanted to get this series out first
and get reactions from upstream before continuing to add to it.
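For orientation, the uapi surface the series adds (quoted in full further
down the thread) would be driven from a uswsusp tool roughly as sketched
below. The struct layout and ioctl number are copied from the patch;
the open/ioctl sequence in the comment is an assumption about how a tool
would use it, since it requires the patched kernel:

```c
#include <assert.h>
#include <stdint.h>
#include <sys/ioctl.h>

#define USWSUSP_KEY_NONCE_SIZE 16

/* Mirrors the uapi struct proposed in the series. */
struct uswsusp_key_blob {
	uint32_t blob_len;
	uint8_t blob[512];
	uint8_t nonce[USWSUSP_KEY_NONCE_SIZE];
} __attribute__((packed));

#define SNAPSHOT_IOC_MAGIC '3'
#define SNAPSHOT_ENABLE_ENCRYPTION \
	_IOWR(SNAPSHOT_IOC_MAGIC, 21, struct uswsusp_key_blob)

/*
 * On suspend, a uswsusp tool would do roughly the following (needs the
 * patched kernel and /dev/snapshot, so only sketched here):
 *
 *	int fd = open("/dev/snapshot", O_RDONLY);
 *	struct uswsusp_key_blob blob = { .blob_len = sizeof(blob) };
 *
 *	ioctl(fd, SNAPSHOT_ENABLE_ENCRYPTION, &blob);
 *	// store blob alongside the image, then read() the encrypted image
 *
 * On resume, the same ioctl hands the saved blob back to the kernel
 * before write()ing the image in.
 */
```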

[1] https://patchwork.kernel.org/project/linux-pm/cover/20210220013255.1083202-1-matthewgarrett@google.com/
[2] https://mjg59.dreamwidth.org/58077.html

Changes in v3:
 - Unify tpm1/2_pcr_reset prototypes (Jarkko)
 - Wait no, remove the TPM1 stuff altogether (Jarkko)
 - Remove extra From tag and blank in commit msg (Jarkko).
 - Split find_and_validate_cc() export to its own patch (Jarkko)
 - Rename tpm_find_and_validate_cc() to tpm2_find_and_validate_cc().
 - Fix up commit message (Jarkko)
 - tpm2_find_and_validate_cc() was split (Jarkko)
 - Simply fully restrict TPM1 since v2 failed to account for tunnelled
   transport sessions (Stefan and Jarkko).
 - Fix SoB and -- note ordering (Kees)
 - Add comments describing the TPM2 spec type names for the new fields
   in tpm2key.asn1 (Kees)
 - Add len buffer checks in tpm2_key_encode() (Kees)
 - Clarified creationpcrs documentation (Ben)
 - Changed funky tag to suggested-by (Kees). Matthew, holler if you want
   something different.
 - ENCRYPTED_HIBERNATION needs TRUSTED_KEYS builtin for
   key_type_trusted.
 - Remove KEYS dependency since it's covered by TRUSTED_KEYS (Kees)
 - Changed funky tag to Co-developed-by (Kees). Matthew, holler if you
   want something different.
 - Changed funky tag to Co-developed-by (Kees)

Changes in v2:
 - Fixed sparse warnings
 - Adjust hash len by 2 due to new ASN.1 storage, and add underflow
   check.
 - Rework load/create_kernel_key() to eliminate a label (Andrey)
 - Call put_device() needed from calling tpm_default_chip().
 - Add missing static on snapshot_encrypted_byte_count()
 - Fold in only the used kernel key bytes to the user key.
 - Make the user key length 32 (Eric)
 - Use CRYPTO_LIB_SHA256 for less boilerplate (Eric)
 - Fixed some sparse warnings
 - Use CRYPTO_LIB_SHA256 to get rid of sha256_data() (Eric)
 - Adjusted offsets due to new ASN.1 format, and added a creation data
   length check.
 - Fix sparse warnings
 - Fix session type comment (Andrey)
 - Eliminate extra label in get/create_kernel_key() (Andrey)
 - Call tpm_try_get_ops() before calling tpm2_flush_context().

Evan Green (8):
  tpm: Export and rename tpm2_find_and_validate_cc()
  security: keys: trusted: Include TPM2 creation data
  security: keys: trusted: Verify creation data
  PM: hibernate: Add kernel-based encryption
  PM: hibernate: Use TPM-backed keys to encrypt image
  PM: hibernate: Mix user key in encrypted hibernate
  PM: hibernate: Verify the digest encryption key
  PM: hibernate: seal the encryption key with a PCR policy

Matthew Garrett (3):
  tpm: Add support for in-kernel resetting of PCRs
  tpm: Allow PCR 23 to be restricted to kernel-only use
  security: keys: trusted: Allow storage of PCR values in creation data

 Documentation/power/userland-swsusp.rst       |    8 +
 .../security/keys/trusted-encrypted.rst       |    6 +
 drivers/char/tpm/Kconfig                      |   12 +
 drivers/char/tpm/tpm-dev-common.c             |    8 +
 drivers/char/tpm/tpm-interface.c              |   25 +
 drivers/char/tpm/tpm.h                        |   23 +
 drivers/char/tpm/tpm1-cmd.c                   |   13 +
 drivers/char/tpm/tpm2-cmd.c                   |   58 +
 drivers/char/tpm/tpm2-space.c                 |    8 +-
 include/keys/trusted-type.h                   |    9 +
 include/linux/tpm.h                           |   12 +
 include/uapi/linux/suspend_ioctls.h           |   28 +-
 kernel/power/Kconfig                          |   15 +
 kernel/power/Makefile                         |    1 +
 kernel/power/power.h                          |    1 +
 kernel/power/snapenc.c                        | 1037 +++++++++++++++++
 kernel/power/snapshot.c                       |    5 +
 kernel/power/user.c                           |   44 +-
 kernel/power/user.h                           |  114 ++
 security/keys/trusted-keys/tpm2key.asn1       |   15 +-
 security/keys/trusted-keys/trusted_tpm1.c     |    9 +
 security/keys/trusted-keys/trusted_tpm2.c     |  318 ++++-
 22 files changed, 1724 insertions(+), 45 deletions(-)
 create mode 100644 kernel/power/snapenc.c
 create mode 100644 kernel/power/user.h

Comments

Ben Boeckel Sept. 27, 2022, 4:58 p.m. UTC | #1
On Tue, Sep 27, 2022 at 09:49:16 -0700, Evan Green wrote:
> From: Matthew Garrett <matthewgarrett@google.com>
> 
> When TPMs generate keys, they can also generate some information
> describing the state of the PCRs at creation time. This data can then
> later be certified by the TPM, allowing verification of the PCR values.
> This allows us to determine the state of the system at the time a key
> was generated. Add an additional argument to the trusted key creation
> options, allowing the user to provide the set of PCRs that should have
> their values incorporated into the creation data.
> 
> Link: https://lore.kernel.org/lkml/20210220013255.1083202-6-matthewgarrett@google.com/
> Signed-off-by: Matthew Garrett <mjg59@google.com>
> Signed-off-by: Evan Green <evgreen@chromium.org>
> ---

Reviewed-by: Ben Boeckel <linux@me.benboeckel.net>

Thanks!

--Ben
Jarkko Sakkinen Sept. 30, 2022, 8:57 p.m. UTC | #2
On Tue, Sep 27, 2022 at 09:49:14AM -0700, Evan Green wrote:
> From: Matthew Garrett <matthewgarrett@google.com>
> 
> Under certain circumstances it might be desirable to enable the creation
> of TPM-backed secrets that are only accessible to the kernel. In an
> ideal world this could be achieved by using TPM localities, but these
> don't appear to be available on consumer systems. An alternative is to
> simply block userland from modifying one of the resettable PCRs, leaving
> it available to the kernel. If the kernel ensures that no userland can
> access the TPM while it is carrying out work, it can reset PCR 23,
> extend it to an arbitrary value, create or load a secret, and then reset
> the PCR again. Even if userland somehow obtains the sealed material, it
> will be unable to unseal it since PCR 23 will never be in the
> appropriate state.

This lacks any sort of description of what the patch does concretely. The
most critical thing it lacks is the addition of the new config flag, which
really should be documented. That helps e.g. when searching with git log,
once this is in the mainline.

The current contents are a perfect "motivation" part.

> 
> Link: https://lore.kernel.org/lkml/20210220013255.1083202-3-matthewgarrett@google.com/
> Signed-off-by: Matthew Garrett <mjg59@google.com>
> Signed-off-by: Evan Green <evgreen@chromium.org>
> ---
> 
> Changes in v3:
>  - Fix up commit message (Jarkko)
>  - tpm2_find_and_validate_cc() was split (Jarkko)
>  - Simply fully restrict TPM1 since v2 failed to account for tunnelled
>    transport sessions (Stefan and Jarkko).
> 
> Changes in v2:
>  - Fixed sparse warnings
> 
>  drivers/char/tpm/Kconfig          | 12 ++++++++++++
>  drivers/char/tpm/tpm-dev-common.c |  8 ++++++++
>  drivers/char/tpm/tpm.h            | 19 +++++++++++++++++++
>  drivers/char/tpm/tpm1-cmd.c       | 13 +++++++++++++
>  drivers/char/tpm/tpm2-cmd.c       | 22 ++++++++++++++++++++++
>  5 files changed, 74 insertions(+)
> 
> diff --git a/drivers/char/tpm/Kconfig b/drivers/char/tpm/Kconfig
> index 927088b2c3d3f2..c8ed54c66e399a 100644
> --- a/drivers/char/tpm/Kconfig
> +++ b/drivers/char/tpm/Kconfig
> @@ -211,4 +211,16 @@ config TCG_FTPM_TEE
>  	  This driver proxies for firmware TPM running in TEE.
>  
>  source "drivers/char/tpm/st33zp24/Kconfig"
> +
> +config TCG_TPM_RESTRICT_PCR
> +	bool "Restrict userland access to PCR 23"
> +	depends on TCG_TPM
> +	help
> +	  If set, block userland from extending or resetting PCR 23. This allows it
> +	  to be restricted to in-kernel use, preventing userland from being able to
> +	  make use of data sealed to the TPM by the kernel. This is required for
> +	  secure hibernation support, but should be left disabled if any userland
> +	  may require access to PCR23. This is a TPM2-only feature, and if enabled
> +	  on a TPM1 machine will cause all usermode TPM commands to return EPERM due
> +	  to the complications introduced by tunnelled sessions in TPM1.2.
>  endif # TCG_TPM
> diff --git a/drivers/char/tpm/tpm-dev-common.c b/drivers/char/tpm/tpm-dev-common.c
> index dc4c0a0a512903..7a4e618c7d1942 100644
> --- a/drivers/char/tpm/tpm-dev-common.c
> +++ b/drivers/char/tpm/tpm-dev-common.c
> @@ -198,6 +198,14 @@ ssize_t tpm_common_write(struct file *file, const char __user *buf,
>  	priv->response_read = false;
>  	*off = 0;
>  
> +	if (priv->chip->flags & TPM_CHIP_FLAG_TPM2)
> +		ret = tpm2_cmd_restricted(priv->chip, priv->data_buffer, size);
> +	else
> +		ret = tpm1_cmd_restricted(priv->chip, priv->data_buffer, size);
> +
> +	if (ret)
> +		goto out;
> +
>  	/*
>  	 * If in nonblocking mode schedule an async job to send
>  	 * the command return the size.
> diff --git a/drivers/char/tpm/tpm.h b/drivers/char/tpm/tpm.h
> index 9c9e5d75b37c78..9f4e64e22807a2 100644
> --- a/drivers/char/tpm/tpm.h
> +++ b/drivers/char/tpm/tpm.h
> @@ -246,4 +246,23 @@ void tpm_bios_log_setup(struct tpm_chip *chip);
>  void tpm_bios_log_teardown(struct tpm_chip *chip);
>  int tpm_dev_common_init(void);
>  void tpm_dev_common_exit(void);
> +
> +#ifdef CONFIG_TCG_TPM_RESTRICT_PCR
> +#define TPM_RESTRICTED_PCR 23
> +
> +int tpm1_cmd_restricted(struct tpm_chip *chip, u8 *buffer, size_t size);
> +int tpm2_cmd_restricted(struct tpm_chip *chip, u8 *buffer, size_t size);
> +#else
> +static inline int tpm1_cmd_restricted(struct tpm_chip *chip, u8 *buffer,
> +				      size_t size)
> +{
> +	return 0;
> +}
> +
> +static inline int tpm2_cmd_restricted(struct tpm_chip *chip, u8 *buffer,
> +				      size_t size)
> +{
> +	return 0;
> +}
> +#endif
>  #endif
> diff --git a/drivers/char/tpm/tpm1-cmd.c b/drivers/char/tpm/tpm1-cmd.c
> index cf64c738510529..1869e89215fcb9 100644
> --- a/drivers/char/tpm/tpm1-cmd.c
> +++ b/drivers/char/tpm/tpm1-cmd.c
> @@ -811,3 +811,16 @@ int tpm1_get_pcr_allocation(struct tpm_chip *chip)
>  
>  	return 0;
>  }
> +
> +#ifdef CONFIG_TCG_TPM_RESTRICT_PCR
> +int tpm1_cmd_restricted(struct tpm_chip *chip, u8 *buffer, size_t size)
> +{
> +	/*
> +	 * Restrict all usermode commands on TPM1.2. Ideally we'd just restrict
> +	 * TPM_ORD_PCR_EXTEND and TPM_ORD_PCR_RESET, but TPM1.2 also supports
> +	 * tunnelled transport sessions where the kernel would be unable to filter
> +	 * commands.
> +	 */
> +	return -EPERM;
> +}
> +#endif
> diff --git a/drivers/char/tpm/tpm2-cmd.c b/drivers/char/tpm/tpm2-cmd.c
> index 69126a6770386e..9c92a3e1e3f463 100644
> --- a/drivers/char/tpm/tpm2-cmd.c
> +++ b/drivers/char/tpm/tpm2-cmd.c
> @@ -821,3 +821,25 @@ int tpm2_find_cc(struct tpm_chip *chip, u32 cc)
>  
>  	return -1;
>  }
> +
> +#ifdef CONFIG_TCG_TPM_RESTRICT_PCR
> +int tpm2_cmd_restricted(struct tpm_chip *chip, u8 *buffer, size_t size)
> +{
> +	int cc = tpm2_find_and_validate_cc(chip, NULL, buffer, size);
> +	__be32 *handle;
> +
> +	switch (cc) {
> +	case TPM2_CC_PCR_EXTEND:
> +	case TPM2_CC_PCR_RESET:
> +		if (size < (TPM_HEADER_SIZE + sizeof(u32)))
> +			return -EINVAL;
> +
> +		handle = (__be32 *)&buffer[TPM_HEADER_SIZE];
> +		if (be32_to_cpu(*handle) == TPM_RESTRICTED_PCR)
> +			return -EPERM;
> +		break;
> +	}
> +
> +	return 0;
> +}
> +#endif
> -- 
> 2.31.0
> 

BR, Jarkko
Jarkko Sakkinen Sept. 30, 2022, 9:30 p.m. UTC | #3
On Tue, Sep 27, 2022 at 09:49:18AM -0700, Evan Green wrote:
> Enabling the kernel to be able to do encryption and integrity checks on
> the hibernate image prevents a malicious userspace from escalating to
> kernel execution via hibernation resume. As a first step toward this, add
> the scaffolding needed for the kernel to do AEAD encryption on the
> hibernate image, giving us both secrecy and integrity.
> 
> We currently hardwire the encryption to be gcm(aes) in 16-page chunks.
> This strikes a balance between minimizing the authentication tag
> overhead on storage, and keeping a modest sized staging buffer. With
> this chunk size, we'd generate 2MB of authentication tag data on an 8GB
> hibernation image.
> 
> The encryption currently sits on top of the core snapshot functionality,
> wired up only if requested in the uswsusp path. This could potentially
> be lowered into the common snapshot code given a mechanism to stitch the
> key contents into the image itself.
> 
> To avoid forcing usermode to deal with sequencing the auth tags in with
> the data, we stitch the auth tags in to the snapshot after each chunk of
> pages. This complicates the read and write functions, as we roll through
> the flow of (for read) 1) fill the staging buffer with encrypted data,
> 2) feed the data pages out to user mode, 3) feed the tag out to user
> mode. To avoid having each syscall return a small and variable amount
> of data, the encrypted versions of read and write operate in a loop,
> allowing an arbitrary amount of data through per syscall.
> 
> One alternative that would simplify things here would be a streaming
> interface to AEAD. Then we could just stream the entire hibernate image
> through directly, and handle a single tag at the end. However there is a
> school of thought that suggests a streaming interface to AEAD represents
> a loaded footgun, as it tempts the caller to act on the decrypted but
> not yet verified data, defeating the purpose of AEAD.
> 
> With this change alone, we don't actually protect ourselves from
> malicious userspace at all, since we kindly hand the key in plaintext
> to usermode. In later changes, we'll seal the key with the TPM
> before handing it back to usermode, so they can't decrypt or tamper with
> the key themselves.
> 
> Signed-off-by: Evan Green <evgreen@chromium.org>
> ---
> 
> (no changes since v1)
> 
>  Documentation/power/userland-swsusp.rst |   8 +
>  include/uapi/linux/suspend_ioctls.h     |  15 +-
>  kernel/power/Kconfig                    |  13 +
>  kernel/power/Makefile                   |   1 +
>  kernel/power/snapenc.c                  | 491 ++++++++++++++++++++++++
>  kernel/power/user.c                     |  40 +-
>  kernel/power/user.h                     | 101 +++++
>  7 files changed, 657 insertions(+), 12 deletions(-)
>  create mode 100644 kernel/power/snapenc.c
>  create mode 100644 kernel/power/user.h
> 
> diff --git a/Documentation/power/userland-swsusp.rst b/Documentation/power/userland-swsusp.rst
> index 1cf62d80a9ca10..f759915a78ce98 100644
> --- a/Documentation/power/userland-swsusp.rst
> +++ b/Documentation/power/userland-swsusp.rst
> @@ -115,6 +115,14 @@ SNAPSHOT_S2RAM
>  	to resume the system from RAM if there's enough battery power or restore
>  	its state on the basis of the saved suspend image otherwise)
>  
> +SNAPSHOT_ENABLE_ENCRYPTION
> +	Enables encryption of the hibernate image within the kernel. Upon suspend
> +	(ie when the snapshot device was opened for reading), returns a blob
> +	representing the random encryption key the kernel created to encrypt the
> +	hibernate image with. Upon resume (ie when the snapshot device was opened
> +	for writing), receives a blob from usermode containing the key material
> +	previously returned during hibernate.
> +
>  The device's read() operation can be used to transfer the snapshot image from
>  the kernel.  It has the following limitations:
>  
> diff --git a/include/uapi/linux/suspend_ioctls.h b/include/uapi/linux/suspend_ioctls.h
> index bcce04e21c0dce..b73026ef824bb9 100644
> --- a/include/uapi/linux/suspend_ioctls.h
> +++ b/include/uapi/linux/suspend_ioctls.h
> @@ -13,6 +13,18 @@ struct resume_swap_area {
>  	__u32 dev;
>  } __attribute__((packed));
>  
> +#define USWSUSP_KEY_NONCE_SIZE 16
> +
> +/*
> + * This structure is used to pass the kernel's hibernate encryption key in
> + * either direction.
> + */
> +struct uswsusp_key_blob {
> +	__u32 blob_len;
> +	__u8 blob[512];
> +	__u8 nonce[USWSUSP_KEY_NONCE_SIZE];
> +} __attribute__((packed));
> +
>  #define SNAPSHOT_IOC_MAGIC	'3'
>  #define SNAPSHOT_FREEZE			_IO(SNAPSHOT_IOC_MAGIC, 1)
>  #define SNAPSHOT_UNFREEZE		_IO(SNAPSHOT_IOC_MAGIC, 2)
> @@ -29,6 +41,7 @@ struct resume_swap_area {
>  #define SNAPSHOT_PREF_IMAGE_SIZE	_IO(SNAPSHOT_IOC_MAGIC, 18)
>  #define SNAPSHOT_AVAIL_SWAP_SIZE	_IOR(SNAPSHOT_IOC_MAGIC, 19, __kernel_loff_t)
>  #define SNAPSHOT_ALLOC_SWAP_PAGE	_IOR(SNAPSHOT_IOC_MAGIC, 20, __kernel_loff_t)
> -#define SNAPSHOT_IOC_MAXNR	20
> +#define SNAPSHOT_ENABLE_ENCRYPTION	_IOWR(SNAPSHOT_IOC_MAGIC, 21, struct uswsusp_key_blob)
> +#define SNAPSHOT_IOC_MAXNR	21
>  
>  #endif /* _LINUX_SUSPEND_IOCTLS_H */
> diff --git a/kernel/power/Kconfig b/kernel/power/Kconfig
> index 60a1d3051cc79a..cd574af0b43379 100644
> --- a/kernel/power/Kconfig
> +++ b/kernel/power/Kconfig
> @@ -92,6 +92,19 @@ config HIBERNATION_SNAPSHOT_DEV
>  
>  	  If in doubt, say Y.
>  
> +config ENCRYPTED_HIBERNATION
> +	bool "Encryption support for userspace snapshots"
> +	depends on HIBERNATION_SNAPSHOT_DEV
> +	depends on CRYPTO_AEAD2=y
> +	default n
> +	help
> +	  Enable support for kernel-based encryption of hibernation snapshots
> +	  created by uswsusp tools.
> +
> +	  Say N if userspace handles the image encryption.
> +
> +	  If in doubt, say N.
> +
>  config PM_STD_PARTITION
>  	string "Default resume partition"
>  	depends on HIBERNATION
> diff --git a/kernel/power/Makefile b/kernel/power/Makefile
> index 874ad834dc8daf..7be08f2e0e3b68 100644
> --- a/kernel/power/Makefile
> +++ b/kernel/power/Makefile
> @@ -16,6 +16,7 @@ obj-$(CONFIG_SUSPEND)		+= suspend.o
>  obj-$(CONFIG_PM_TEST_SUSPEND)	+= suspend_test.o
>  obj-$(CONFIG_HIBERNATION)	+= hibernate.o snapshot.o swap.o
>  obj-$(CONFIG_HIBERNATION_SNAPSHOT_DEV) += user.o
> +obj-$(CONFIG_ENCRYPTED_HIBERNATION) += snapenc.o
>  obj-$(CONFIG_PM_AUTOSLEEP)	+= autosleep.o
>  obj-$(CONFIG_PM_WAKELOCKS)	+= wakelock.o
>  
> diff --git a/kernel/power/snapenc.c b/kernel/power/snapenc.c
> new file mode 100644
> index 00000000000000..cb90692d6ab83a
> --- /dev/null
> +++ b/kernel/power/snapenc.c
> @@ -0,0 +1,491 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/* This file provides encryption support for system snapshots. */
> +
> +#include <linux/crypto.h>
> +#include <crypto/aead.h>
> +#include <crypto/gcm.h>
> +#include <linux/random.h>
> +#include <linux/mm.h>
> +#include <linux/uaccess.h>
> +
> +#include "power.h"
> +#include "user.h"
> +
> +/* Encrypt more data from the snapshot into the staging area. */
> +static int snapshot_encrypt_refill(struct snapshot_data *data)
> +{
> +
> +	u8 nonce[GCM_AES_IV_SIZE];
> +	int pg_idx;
> +	int res;
> +	struct aead_request *req = data->aead_req;
> +	DECLARE_CRYPTO_WAIT(wait);
> +	size_t total = 0;

Depends on the subsystem (maintainers), but at least in x86 reverse
Christmas tree ordering is preferred.

> +
> +	/*
> +	 * The first buffer is the associated data, set to the offset to prevent
> +	 * attacks that rearrange chunks.
> +	 */
> +	sg_set_buf(&data->sg[0], &data->crypt_total, sizeof(data->crypt_total));
> +
> +	/* Load the crypt buffer with snapshot pages. */
> +	for (pg_idx = 0; pg_idx < CHUNK_SIZE; pg_idx++) {
> +		void *buf = data->crypt_pages[pg_idx];
> +
> +		res = snapshot_read_next(&data->handle);
> +		if (res < 0)
> +			return res;
> +		if (res == 0)
> +			break;
> +
> +		WARN_ON(res != PAGE_SIZE);
> +
> +		/*
> +		 * Copy the page into the staging area. A future optimization
> +		 * could potentially skip this copy for lowmem pages.
> +		 */
> +		memcpy(buf, data_of(data->handle), PAGE_SIZE);
> +		sg_set_buf(&data->sg[1 + pg_idx], buf, PAGE_SIZE);
> +		total += PAGE_SIZE;
> +	}
> +
> +	sg_set_buf(&data->sg[1 + pg_idx], &data->auth_tag, SNAPSHOT_AUTH_TAG_SIZE);
> +	aead_request_set_callback(req, 0, crypto_req_done, &wait);
> +	/*
> +	 * Use incrementing nonces for each chunk, since a 64 bit value won't
> +	 * roll into re-use for any given hibernate image.
> +	 */
> +	memcpy(&nonce[0], &data->nonce_low, sizeof(data->nonce_low));
> +	memcpy(&nonce[sizeof(data->nonce_low)],
> +	       &data->nonce_high,
> +	       sizeof(nonce) - sizeof(data->nonce_low));
> +
> +	data->nonce_low += 1;
> +	/* Total does not include AAD or the auth tag. */
> +	aead_request_set_crypt(req, data->sg, data->sg, total, nonce);
> +	res = crypto_wait_req(crypto_aead_encrypt(req), &wait);
> +	if (res)
> +		return res;
> +
> +	data->crypt_size = total;
> +	data->crypt_total += total;
> +	return 0;
> +}
> +
> +/* Decrypt data from the staging area and push it to the snapshot. */
> +static int snapshot_decrypt_drain(struct snapshot_data *data)
> +{
> +	u8 nonce[GCM_AES_IV_SIZE];
> +	int page_count;
> +	int pg_idx;
> +	int res;
> +	struct aead_request *req = data->aead_req;
> +	DECLARE_CRYPTO_WAIT(wait);
> +	size_t total;
> +
> +	/* Set up the associated data. */
> +	sg_set_buf(&data->sg[0], &data->crypt_total, sizeof(data->crypt_total));
> +
> +	/*
> +	 * Get the number of full pages, which could be short at the end. There
> +	 * should also be a tag at the end, so the offset won't be an even page.
> +	 */
> +	page_count = data->crypt_offset >> PAGE_SHIFT;
> +	total = page_count << PAGE_SHIFT;
> +	if ((total == 0) || (total == data->crypt_offset))
> +		return -EINVAL;
> +
> +	/*
> +	 * Load the sg list with the crypt buffer. Inline decrypt back into the
> +	 * staging buffer. A future optimization could decrypt directly into
> +	 * lowmem pages.
> +	 */
> +	for (pg_idx = 0; pg_idx < page_count; pg_idx++)
> +		sg_set_buf(&data->sg[1 + pg_idx], data->crypt_pages[pg_idx], PAGE_SIZE);
> +
> +	/*
> +	 * It's possible this is the final decrypt, and there are fewer than
> +	 * CHUNK_SIZE pages. If this is the case we would have just written the
> +	 * auth tag into the first few bytes of a new page. Copy to the tag if
> +	 * so.
> +	 */
> +	if ((page_count < CHUNK_SIZE) &&
> +	    (data->crypt_offset - total) == sizeof(data->auth_tag)) {
> +
> +		memcpy(data->auth_tag,
> +			data->crypt_pages[pg_idx],
> +			sizeof(data->auth_tag));
> +
> +	} else if (data->crypt_offset !=
> +		   ((CHUNK_SIZE << PAGE_SHIFT) + SNAPSHOT_AUTH_TAG_SIZE)) {
> +
> +		return -EINVAL;
> +	}
> +
> +	sg_set_buf(&data->sg[1 + pg_idx], &data->auth_tag, SNAPSHOT_AUTH_TAG_SIZE);
> +	aead_request_set_callback(req, 0, crypto_req_done, &wait);
> +	memcpy(&nonce[0], &data->nonce_low, sizeof(data->nonce_low));
> +	memcpy(&nonce[sizeof(data->nonce_low)],
> +	       &data->nonce_high,
> +	       sizeof(nonce) - sizeof(data->nonce_low));
> +
> +	data->nonce_low += 1;
> +	aead_request_set_crypt(req, data->sg, data->sg, total + SNAPSHOT_AUTH_TAG_SIZE, nonce);
> +	res = crypto_wait_req(crypto_aead_decrypt(req), &wait);
> +	if (res)
> +		return res;
> +
> +	data->crypt_size = 0;
> +	data->crypt_offset = 0;
> +
> +	/* Push the decrypted pages further down the stack. */
> +	total = 0;
> +	for (pg_idx = 0; pg_idx < page_count; pg_idx++) {
> +		void *buf = data->crypt_pages[pg_idx];
> +
> +		res = snapshot_write_next(&data->handle);
> +		if (res < 0)
> +			return res;
> +		if (res == 0)
> +			break;
> +
> +		if (!data_of(data->handle))
> +			return -EINVAL;
> +
> +		WARN_ON(res != PAGE_SIZE);
> +
> +		/*
> +		 * Copy the page into the staging area. A future optimization
> +		 * could potentially skip this copy for lowmem pages.
> +		 */
> +		memcpy(data_of(data->handle), buf, PAGE_SIZE);
> +		total += PAGE_SIZE;
> +	}
> +
> +	data->crypt_total += total;
> +	return 0;
> +}
> +
> +static ssize_t snapshot_read_next_encrypted(struct snapshot_data *data,
> +					    void **buf)
> +{
> +	size_t tag_off;
> +
> +	/* Refill the encrypted buffer if it's empty. */
> +	if ((data->crypt_size == 0) ||
> +	    (data->crypt_offset >=
> +	     (data->crypt_size + SNAPSHOT_AUTH_TAG_SIZE))) {
> +
> +		int rc;
> +
> +		data->crypt_size = 0;
> +		data->crypt_offset = 0;
> +		rc = snapshot_encrypt_refill(data);
> +		if (rc < 0)
> +			return rc;
> +	}
> +
> +	/* Return data pages if the offset is in that region. */
> +	if (data->crypt_offset < data->crypt_size) {
> +		size_t pg_idx = data->crypt_offset >> PAGE_SHIFT;
> +		size_t pg_off = data->crypt_offset & (PAGE_SIZE - 1);
> +		*buf = data->crypt_pages[pg_idx] + pg_off;
> +		return PAGE_SIZE - pg_off;
> +	}
> +
> +	/* Use offsets just beyond the size to return the tag. */
> +	tag_off = data->crypt_offset - data->crypt_size;
> +	if (tag_off > SNAPSHOT_AUTH_TAG_SIZE)
> +		tag_off = SNAPSHOT_AUTH_TAG_SIZE;
> +
> +	*buf = data->auth_tag + tag_off;
> +	return SNAPSHOT_AUTH_TAG_SIZE - tag_off;
> +}
> +
> +static ssize_t snapshot_write_next_encrypted(struct snapshot_data *data,
> +					     void **buf)
> +{
> +	size_t tag_off;
> +
> +	/* Return data pages if the offset is in that region. */
> +	if (data->crypt_offset < (PAGE_SIZE * CHUNK_SIZE)) {
> +		size_t pg_idx = data->crypt_offset >> PAGE_SHIFT;
> +		size_t pg_off = data->crypt_offset & (PAGE_SIZE - 1);
> +		*buf = data->crypt_pages[pg_idx] + pg_off;
> +		return PAGE_SIZE - pg_off;
> +	}
> +
> +	/* Use offsets just beyond the size to return the tag. */
> +	tag_off = data->crypt_offset - (PAGE_SIZE * CHUNK_SIZE);
> +	if (tag_off > SNAPSHOT_AUTH_TAG_SIZE)
> +		tag_off = SNAPSHOT_AUTH_TAG_SIZE;
> +
> +	*buf = data->auth_tag + tag_off;
> +	return SNAPSHOT_AUTH_TAG_SIZE - tag_off;
> +}
> +
> +ssize_t snapshot_read_encrypted(struct snapshot_data *data,
> +	char __user *buf, size_t count, loff_t *offp)
> +{
> +	ssize_t total = 0;
> +
> +	/* Loop getting buffers of varying sizes and copying to userspace. */
> +	while (count) {
> +		size_t copy_size;
> +		size_t not_done;
> +		void *src;
> +		ssize_t src_size = snapshot_read_next_encrypted(data, &src);
> +
> +		if (src_size <= 0) {
> +			if (total == 0)
> +				return src_size;
> +
> +			break;
> +		}
> +
> +		copy_size = min(count, (size_t)src_size);
> +		not_done = copy_to_user(buf + total, src, copy_size);
> +		copy_size -= not_done;
> +		total += copy_size;
> +		count -= copy_size;
> +		data->crypt_offset += copy_size;
> +		if (copy_size == 0) {
> +			if (total == 0)
> +				return -EFAULT;
> +
> +			break;
> +		}
> +	}
> +
> +	*offp += total;
> +	return total;
> +}
> +
> +ssize_t snapshot_write_encrypted(struct snapshot_data *data,
> +	const char __user *buf, size_t count, loff_t *offp)
> +{
> +	ssize_t total = 0;
> +
> +	/* Loop getting buffers of varying sizes and copying from. */
> +	while (count) {
> +		size_t copy_size;
> +		size_t not_done;
> +		void *dst;
> +		ssize_t dst_size = snapshot_write_next_encrypted(data, &dst);
> +
> +		if (dst_size <= 0) {
> +			if (total == 0)
> +				return dst_size;
> +
> +			break;
> +		}
> +
> +		copy_size = min(count, (size_t)dst_size);
> +		not_done = copy_from_user(dst, buf + total, copy_size);
> +		copy_size -= not_done;
> +		total += copy_size;
> +		count -= copy_size;
> +		data->crypt_offset += copy_size;
> +		if (copy_size == 0) {
> +			if (total == 0)
> +				return -EFAULT;
> +
> +			break;
> +		}
> +
> +		/* Drain the encrypted buffer if it's full. */
> +		if ((data->crypt_offset >=
> +		    ((PAGE_SIZE * CHUNK_SIZE) + SNAPSHOT_AUTH_TAG_SIZE))) {
> +
> +			int rc;
> +
> +			rc = snapshot_decrypt_drain(data);
> +			if (rc < 0)
> +				return rc;
> +		}
> +	}
> +
> +	*offp += total;
> +	return total;
> +}
> +
> +void snapshot_teardown_encryption(struct snapshot_data *data)
> +{
> +	int i;
> +
> +	if (data->aead_req) {
> +		aead_request_free(data->aead_req);
> +		data->aead_req = NULL;
> +	}
> +
> +	if (data->aead_tfm) {
> +		crypto_free_aead(data->aead_tfm);
> +		data->aead_tfm = NULL;
> +	}
> +
> +	for (i = 0; i < CHUNK_SIZE; i++) {
> +		if (data->crypt_pages[i]) {
> +			free_page((unsigned long)data->crypt_pages[i]);
> +			data->crypt_pages[i] = NULL;
> +		}
> +	}
> +}
> +
> +static int snapshot_setup_encryption_common(struct snapshot_data *data)
> +{
> +	int i, rc;
> +
> +	data->crypt_total = 0;
> +	data->crypt_offset = 0;
> +	data->crypt_size = 0;
> +	memset(data->crypt_pages, 0, sizeof(data->crypt_pages));
> +	/* This only works once per hibernate. */
> +	if (data->aead_tfm)
> +		return -EINVAL;
> +
> +	/* Set up the encryption transform */
> +	data->aead_tfm = crypto_alloc_aead("gcm(aes)", 0, 0);
> +	if (IS_ERR(data->aead_tfm)) {
> +		rc = PTR_ERR(data->aead_tfm);
> +		data->aead_tfm = NULL;
> +		return rc;
> +	}
> +
> +	rc = -ENOMEM;
> +	data->aead_req = aead_request_alloc(data->aead_tfm, GFP_KERNEL);
> +	if (data->aead_req == NULL)
> +		goto setup_fail;
> +
> +	/* Allocate the staging area */
> +	for (i = 0; i < CHUNK_SIZE; i++) {
> +		data->crypt_pages[i] = (void *)__get_free_page(GFP_ATOMIC);
> +		if (data->crypt_pages[i] == NULL)
> +			goto setup_fail;
> +	}
> +
> +	sg_init_table(data->sg, CHUNK_SIZE + 2);
> +
> +	/*
> +	 * The associated data will be the offset so that blocks can't be
> +	 * rearranged.
> +	 */
> +	aead_request_set_ad(data->aead_req, sizeof(data->crypt_total));
> +	rc = crypto_aead_setauthsize(data->aead_tfm, SNAPSHOT_AUTH_TAG_SIZE);
> +	if (rc)
> +		goto setup_fail;
> +
> +	return 0;
> +
> +setup_fail:
> +	snapshot_teardown_encryption(data);
> +	return rc;
> +}
> +
> +int snapshot_get_encryption_key(struct snapshot_data *data,
> +	struct uswsusp_key_blob __user *key)
> +{
> +	u8 aead_key[SNAPSHOT_ENCRYPTION_KEY_SIZE];
> +	u8 nonce[USWSUSP_KEY_NONCE_SIZE];
> +	int rc;
> +	/* Don't pull a random key from a world that can be reset. */
> +	if (data->ready)
> +		return -EPIPE;
> +
> +	rc = snapshot_setup_encryption_common(data);
> +	if (rc)
> +		return rc;
> +
> +	/* Build a random starting nonce. */
> +	get_random_bytes(nonce, sizeof(nonce));
> +	memcpy(&data->nonce_low, &nonce[0], sizeof(data->nonce_low));
> +	memcpy(&data->nonce_high, &nonce[8], sizeof(data->nonce_high));
> +	/* Build a random key */
> +	get_random_bytes(aead_key, sizeof(aead_key));
> +	rc = crypto_aead_setkey(data->aead_tfm, aead_key, sizeof(aead_key));
> +	if (rc)
> +		goto fail;
> +
> +	/* Hand the key back to user mode (to be changed!) */
> +	rc = put_user(sizeof(struct uswsusp_key_blob), &key->blob_len);
> +	if (rc)
> +		goto fail;
> +
> +	rc = copy_to_user(&key->blob, &aead_key, sizeof(aead_key));
> +	if (rc)
> +		goto fail;
> +
> +	rc = copy_to_user(&key->nonce, &nonce, sizeof(nonce));
> +	if (rc)
> +		goto fail;
> +
> +	return 0;
> +
> +fail:
> +	snapshot_teardown_encryption(data);
> +	return rc;
> +}
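
One more thing I noticed in this function: copy_to_user() returns the
number of bytes it could *not* copy, not a -errno, so on a partial copy
the fail path ends up returning a positive byte count to the caller.
Something along these lines (untested sketch) would map the shortfall to
a proper error code:

	/* copy_to_user() returns bytes left uncopied; turn any
	 * shortfall into -EFAULT instead of leaking the count. */
	if (copy_to_user(&key->blob, &aead_key, sizeof(aead_key)) ||
	    copy_to_user(&key->nonce, &nonce, sizeof(nonce))) {
		rc = -EFAULT;
		goto fail;
	}

The copy_from_user() in snapshot_set_encryption_key() below has the same
issue.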
> +
> +int snapshot_set_encryption_key(struct snapshot_data *data,
> +	struct uswsusp_key_blob __user *key)
> +{
> +	struct uswsusp_key_blob blob;
> +	int rc;
> +
> +	/* It's too late if data's been pushed in. */
> +	if (data->handle.cur)
> +		return -EPIPE;
> +
> +	rc = snapshot_setup_encryption_common(data);
> +	if (rc)
> +		return rc;
> +
> +	/* Load the key from user mode. */
> +	rc = copy_from_user(&blob, key, sizeof(struct uswsusp_key_blob));
> +	if (rc)
> +		goto crypto_setup_fail;
> +
> +	if (blob.blob_len != sizeof(struct uswsusp_key_blob)) {
> +		rc = -EINVAL;
> +		goto crypto_setup_fail;
> +	}
> +
> +	rc = crypto_aead_setkey(data->aead_tfm,
> +				blob.blob,
> +				SNAPSHOT_ENCRYPTION_KEY_SIZE);
> +
> +	if (rc)
> +		goto crypto_setup_fail;
> +
> +	/* Load the starting nonce. */
> +	memcpy(&data->nonce_low, &blob.nonce[0], sizeof(data->nonce_low));
> +	memcpy(&data->nonce_high, &blob.nonce[8], sizeof(data->nonce_high));
> +	return 0;
> +
> +crypto_setup_fail:
> +	snapshot_teardown_encryption(data);
> +	return rc;
> +}
> +
> +loff_t snapshot_get_encrypted_image_size(loff_t raw_size)
> +{
> +	loff_t pages = raw_size >> PAGE_SHIFT;
> +	loff_t chunks = (pages + (CHUNK_SIZE - 1)) / CHUNK_SIZE;
> +	/*
> +	 * The encrypted size is the normal size, plus a stitched in
> +	 * authentication tag for every chunk of pages.
> +	 */
> +	return raw_size + (chunks * SNAPSHOT_AUTH_TAG_SIZE);
> +}
> +
> +int snapshot_finalize_decrypted_image(struct snapshot_data *data)
> +{
> +	int rc;
> +
> +	if (data->crypt_offset != 0) {
> +		rc = snapshot_decrypt_drain(data);
> +		if (rc)
> +			return rc;
> +	}
> +
> +	return 0;
> +}
> diff --git a/kernel/power/user.c b/kernel/power/user.c
> index 3a4e70366f354c..bba5cdbd2c0239 100644
> --- a/kernel/power/user.c
> +++ b/kernel/power/user.c
> @@ -25,19 +25,10 @@
>  #include <linux/uaccess.h>
>  
>  #include "power.h"
> +#include "user.h"
>  
>  static bool need_wait;
> -
> -static struct snapshot_data {
> -	struct snapshot_handle handle;
> -	int swap;
> -	int mode;
> -	bool frozen;
> -	bool ready;
> -	bool platform_support;
> -	bool free_bitmaps;
> -	dev_t dev;
> -} snapshot_state;
> +struct snapshot_data snapshot_state;
>  
>  int is_hibernate_resume_dev(dev_t dev)
>  {
> @@ -122,6 +113,7 @@ static int snapshot_release(struct inode *inode, struct file *filp)
>  	} else if (data->free_bitmaps) {
>  		free_basic_memory_bitmaps();
>  	}
> +	snapshot_teardown_encryption(data);
>  	pm_notifier_call_chain(data->mode == O_RDONLY ?
>  			PM_POST_HIBERNATION : PM_POST_RESTORE);
>  	hibernate_release();
> @@ -146,6 +138,12 @@ static ssize_t snapshot_read(struct file *filp, char __user *buf,
>  		res = -ENODATA;
>  		goto Unlock;
>  	}
> +
> +	if (snapshot_encryption_enabled(data)) {
> +		res = snapshot_read_encrypted(data, buf, count, offp);
> +		goto Unlock;
> +	}
> +
>  	if (!pg_offp) { /* on page boundary? */
>  		res = snapshot_read_next(&data->handle);
>  		if (res <= 0)
> @@ -182,6 +180,11 @@ static ssize_t snapshot_write(struct file *filp, const char __user *buf,
>  
>  	data = filp->private_data;
>  
> +	if (snapshot_encryption_enabled(data)) {
> +		res = snapshot_write_encrypted(data, buf, count, offp);
> +		goto unlock;
> +	}
> +
>  	if (!pg_offp) {
>  		res = snapshot_write_next(&data->handle);
>  		if (res <= 0)
> @@ -317,6 +320,12 @@ static long snapshot_ioctl(struct file *filp, unsigned int cmd,
>  		break;
>  
>  	case SNAPSHOT_ATOMIC_RESTORE:
> +		if (snapshot_encryption_enabled(data)) {
> +			error = snapshot_finalize_decrypted_image(data);
> +			if (error)
> +				break;
> +		}
> +
>  		snapshot_write_finalize(&data->handle);
>  		if (data->mode != O_WRONLY || !data->frozen ||
>  		    !snapshot_image_loaded(&data->handle)) {
> @@ -352,6 +361,8 @@ static long snapshot_ioctl(struct file *filp, unsigned int cmd,
>  		}
>  		size = snapshot_get_image_size();
>  		size <<= PAGE_SHIFT;
> +		if (snapshot_encryption_enabled(data))
> +			size = snapshot_get_encrypted_image_size(size);
>  		error = put_user(size, (loff_t __user *)arg);
>  		break;
>  
> @@ -409,6 +420,13 @@ static long snapshot_ioctl(struct file *filp, unsigned int cmd,
>  		error = snapshot_set_swap_area(data, (void __user *)arg);
>  		break;
>  
> +	case SNAPSHOT_ENABLE_ENCRYPTION:
> +		if (data->mode == O_RDONLY)
> +			error = snapshot_get_encryption_key(data, (void __user *)arg);
> +		else
> +			error = snapshot_set_encryption_key(data, (void __user *)arg);
> +		break;
> +
>  	default:
>  		error = -ENOTTY;
>  
> diff --git a/kernel/power/user.h b/kernel/power/user.h
> new file mode 100644
> index 00000000000000..6823e2eba7ec53
> --- /dev/null
> +++ b/kernel/power/user.h
> @@ -0,0 +1,101 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +
> +#include <linux/crypto.h>
> +#include <crypto/aead.h>
> +#include <crypto/aes.h>
> +
> +#define SNAPSHOT_ENCRYPTION_KEY_SIZE AES_KEYSIZE_128
> +#define SNAPSHOT_AUTH_TAG_SIZE 16
> +
> +/* Define the number of pages in a single AEAD encryption chunk. */
> +#define CHUNK_SIZE 16
> +
> +struct snapshot_data {
> +	struct snapshot_handle handle;
> +	int swap;
> +	int mode;
> +	bool frozen;
> +	bool ready;
> +	bool platform_support;
> +	bool free_bitmaps;
> +	dev_t dev;
> +
> +#if defined(CONFIG_ENCRYPTED_HIBERNATION)
> +	struct crypto_aead *aead_tfm;
> +	struct aead_request *aead_req;
> +	void *crypt_pages[CHUNK_SIZE];
> +	u8 auth_tag[SNAPSHOT_AUTH_TAG_SIZE];
> +	struct scatterlist sg[CHUNK_SIZE + 2]; /* Add room for AD and auth tag. */
> +	size_t crypt_offset;
> +	size_t crypt_size;
> +	uint64_t crypt_total;
> +	uint64_t nonce_low;
> +	uint64_t nonce_high;
> +#endif
> +
> +};
> +
> +extern struct snapshot_data snapshot_state;
> +
> +/* kernel/power/swapenc.c routines */
> +#if defined(CONFIG_ENCRYPTED_HIBERNATION)
> +
> +ssize_t snapshot_read_encrypted(struct snapshot_data *data,
> +	char __user *buf, size_t count, loff_t *offp);
> +
> +ssize_t snapshot_write_encrypted(struct snapshot_data *data,
> +	const char __user *buf, size_t count, loff_t *offp);
> +
> +void snapshot_teardown_encryption(struct snapshot_data *data);
> +int snapshot_get_encryption_key(struct snapshot_data *data,
> +	struct uswsusp_key_blob __user *key);
> +
> +int snapshot_set_encryption_key(struct snapshot_data *data,
> +	struct uswsusp_key_blob __user *key);


These do not look properly aligned.

At least for the last one you could put it on a single line, since it is
only 97 characters, i.e. within the kernel's 100-column limit:

int snapshot_set_encryption_key(struct snapshot_data *data, struct uswsusp_key_blob __user *key);
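
For the prototypes that do stay on two lines, aligning the continuation
with the opening parenthesis would also match the usual kernel style,
e.g.:

int snapshot_get_encryption_key(struct snapshot_data *data,
				struct uswsusp_key_blob __user *key);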

> +
> +#define snapshot_encryption_enabled(data) (!!(data)->aead_tfm)
> +
> +#else
> +
> +ssize_t snapshot_read_encrypted(struct snapshot_data *data,
> +	char __user *buf, size_t count, loff_t *offp)
> +{
> +	return -ENOTTY;
> +}
> +
> +ssize_t snapshot_write_encrypted(struct snapshot_data *data,
> +	const char __user *buf, size_t count, loff_t *offp)
> +{
> +	return -ENOTTY;
> +}
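
Also, these two !CONFIG_ENCRYPTED_HIBERNATION fallbacks are plain
(non-static) definitions in a header, so any second translation unit
including user.h would hit multiple-definition errors at link time; the
stubs further down are static but not inline, which tends to trigger
"defined but not used" warnings. Making them all static inline avoids
both, e.g. (sketch of the first one only):

	static inline ssize_t snapshot_read_encrypted(struct snapshot_data *data,
						      char __user *buf,
						      size_t count, loff_t *offp)
	{
		return -ENOTTY;
	}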
> +
> +static void snapshot_teardown_encryption(struct snapshot_data *data) {}
> +static int snapshot_get_encryption_key(struct snapshot_data *data,
> +	struct uswsusp_key_blob __user *key)
> +{
> +	return -ENOTTY;
> +}
> +
> +static int snapshot_set_encryption_key(struct snapshot_data *data,
> +	struct uswsusp_key_blob __user *key)
> +{
> +	return -ENOTTY;
> +}
> +
> +static loff_t snapshot_get_encrypted_image_size(loff_t raw_size)
> +{
> +	return raw_size;
> +}
> +
> +static int snapshot_finalize_decrypted_image(struct snapshot_data *data)
> +{
> +	return -ENOTTY;
> +}
> +
> +#define snapshot_encryption_enabled(data) (0)
> +
> +#endif
> -- 
> 2.31.0
> 

BR, Jarkko