
[v2,1/3] staging: ccree: copy IV to DMAable memory

Message ID 1509610224-14708-2-git-send-email-gilad@benyossef.com
State Accepted
Commit e7cdcba451b3ddd68e714875be13e782577c9e4a
Series [v2,1/3] staging: ccree: copy IV to DMAable memory

Commit Message

Gilad Ben-Yossef Nov. 2, 2017, 8:10 a.m. UTC
We are being passed an IV buffer of unknown origin, which may be
stack allocated and thus not safe for DMA. Allocate a DMA-safe
buffer for the IV and use that instead.

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>

---
 drivers/staging/ccree/ssi_cipher.c | 20 ++++++++++++++++++--
 drivers/staging/ccree/ssi_cipher.h |  1 +
 2 files changed, 19 insertions(+), 2 deletions(-)

-- 
2.7.4
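
For background on why a stack-allocated IV is a problem: the DMA API can
only map memory in the kernel's linear mapping, and with CONFIG_VMAP_STACK
the stack lives in vmalloc space. A rough sketch of the kind of check
involved, where buf_is_dma_capable() is a made-up name but
virt_addr_valid() and object_is_on_stack() are real kernel helpers:

#include <linux/mm.h>
#include <linux/sched/task_stack.h>

/* Illustrative only: true if buf is plausibly safe to hand to the
 * DMA API. With CONFIG_VMAP_STACK the stack is vmalloc-backed, so
 * virt_addr_valid() already rejects it; object_is_on_stack() covers
 * configurations where the stack sits in the linear map.
 */
static bool buf_is_dma_capable(const void *buf)
{
	return virt_addr_valid(buf) && !object_is_on_stack(buf);
}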

Comments

Horia Geanta Nov. 8, 2017, 10:26 a.m. UTC | #1
On 11/2/2017 10:14 AM, Gilad Ben-Yossef wrote:
> We are being passed an IV buffer of unknown origin, which may be
> stack allocated and thus not safe for DMA. Allocate a DMA-safe
> buffer for the IV and use that instead.
> 
IIUC this fixes only the (a)blkcipher / skcipher algorithms.
What about aead, authenc?

The fact that only the skcipher tcrypt tests use IVs on the stack
doesn't mean the aead and authenc implementations are safe - other
crypto API users could provide IVs lying in non-DMAable memory.

To reiterate, the proper approach is to fix the crypto API to guarantee
that IVs are DMAable.
However, Herbert suggests he is not willing to do this work:
https://www.mail-archive.com/linux-crypto@vger.kernel.org/msg28821.html

A few high-level details mentioning what this implies would be helpful,
in case somebody else decides it's worth pursuing this path.

The compromise is to fix all crypto drivers that need DMAable IVs.
IMHO this is suboptimal, both in terms of performance (memory
allocation, memcpy) and in terms of increased code complexity.
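
Concretely, each affected driver ends up repeating the pattern in the
patch below; a condensed sketch, where dup_iv_for_dma() is a made-up
name and kmemdup() is the real kernel helper equivalent to the patch's
kmalloc() plus memcpy():

#include <linux/slab.h>

/* One allocation and one copy per request; the caller must kfree()
 * the duplicate on both the completion and error paths, as the patch
 * below does.
 */
static u8 *dup_iv_for_dma(const u8 *iv, unsigned int ivsize, gfp_t gfp)
{
	return kmemdup(iv, ivsize, gfp);
}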

Horia
Herbert Xu Nov. 8, 2017, 12:07 p.m. UTC | #2
On Wed, Nov 08, 2017 at 01:54:03PM +0200, Gilad Ben-Yossef wrote:
>
> As a HW based crypto driver maintainer I sympathize, but let's play
> devil's advocate for a second here:
>
> In the current state, HW based crypto drivers need to allocate a buffer
> and copy the IV, because they don't know if they got a DMAable buffer
> or not. SW based crypto drivers don't need to do anything.

When I suggested an API change, I was thinking of an option of
supplying an SG list instead of the current kernel pointer.

IOW, the existing users do not have to change, but where we know that
the pointer can be DMAed, you can opt in to the new interface.

The crypto API can then provide a helper to do the copying where
necessary.

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
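
No such copy helper exists in the crypto API today; a rough sketch of
the opt-in arrangement described above might look like the following,
where crypto_iv_to_dma_buf() is a hypothetical name but
sg_copy_to_buffer() and sg_nents() are existing kernel APIs:

#include <linux/scatterlist.h>
#include <linux/slab.h>

/* Hypothetical opt-in helper: callers that can pass the IV as an SG
 * list do so, and a driver that needs a contiguous DMA-safe IV asks
 * for a linearized copy only when its hardware requires one.
 */
static u8 *crypto_iv_to_dma_buf(struct scatterlist *iv_sg,
				unsigned int ivsize, gfp_t gfp)
{
	u8 *iv = kmalloc(ivsize, gfp);

	if (!iv)
		return NULL;
	sg_copy_to_buffer(iv_sg, sg_nents(iv_sg), iv, ivsize);
	return iv;
}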

Patch

diff --git a/drivers/staging/ccree/ssi_cipher.c b/drivers/staging/ccree/ssi_cipher.c
index 78706f5..0b69103 100644
--- a/drivers/staging/ccree/ssi_cipher.c
+++ b/drivers/staging/ccree/ssi_cipher.c
@@ -695,6 +695,7 @@ static int ssi_blkcipher_complete(struct device *dev,
 	struct ablkcipher_request *req = (struct ablkcipher_request *)areq;
 
 	ssi_buffer_mgr_unmap_blkcipher_request(dev, req_ctx, ivsize, src, dst);
+	kfree(req_ctx->iv);
 
 	/*Decrease the inflight counter*/
 	if (ctx_p->flow_mode == BYPASS && ctx_p->drvdata->inflight_counter > 0)
@@ -757,6 +758,17 @@ static int ssi_blkcipher_process(
 		rc = 0;
 		goto exit_process;
 	}
+
+	/* The IV we are handed may be allocated from the stack so
+	 * we must copy it to a DMAable buffer before use.
+	 */
+	req_ctx->iv = kmalloc(ivsize, GFP_KERNEL);
+	if (!req_ctx->iv) {
+		rc = -ENOMEM;
+		goto exit_process;
+	}
+	memcpy(req_ctx->iv, info, ivsize);
+
 	/*For CTS in case of data size aligned to 16 use CBC mode*/
 	if (((nbytes % AES_BLOCK_SIZE) == 0) && (ctx_p->cipher_mode == DRV_CIPHER_CBC_CTS)) {
 		ctx_p->cipher_mode = DRV_CIPHER_CBC;
@@ -778,7 +790,9 @@ static int ssi_blkcipher_process(
 
 	/* STAT_PHASE_1: Map buffers */
 
-	rc = ssi_buffer_mgr_map_blkcipher_request(ctx_p->drvdata, req_ctx, ivsize, nbytes, info, src, dst);
+	rc = ssi_buffer_mgr_map_blkcipher_request(ctx_p->drvdata, req_ctx,
+						  ivsize, nbytes, req_ctx->iv,
+						  src, dst);
 	if (unlikely(rc != 0)) {
 		dev_err(dev, "map_request() failed\n");
 		goto exit_process;
@@ -830,8 +844,10 @@ static int ssi_blkcipher_process(
 	if (cts_restore_flag != 0)
 		ctx_p->cipher_mode = DRV_CIPHER_CBC_CTS;
 
-	if (rc != -EINPROGRESS)
+	if (rc != -EINPROGRESS) {
 		kfree(req_ctx->backup_info);
+		kfree(req_ctx->iv);
+	}
 
 	return rc;
 }
diff --git a/drivers/staging/ccree/ssi_cipher.h b/drivers/staging/ccree/ssi_cipher.h
index f499962..25e6335 100644
--- a/drivers/staging/ccree/ssi_cipher.h
+++ b/drivers/staging/ccree/ssi_cipher.h
@@ -43,6 +43,7 @@ struct blkcipher_req_ctx {
 	u32 out_nents;
 	u32 out_mlli_nents;
 	u8 *backup_info; /*store iv for generated IV flow*/
+	u8 *iv;
 	bool is_giv;
 	struct mlli_params mlli_params;
 };