[PATCH 19/30 for 5.4] dma-direct: re-encrypt memory if dma_direct_alloc_pages() fails

Message ID 20200925161916.204667-20-pgonda@google.com
State New
Series Backport unencrypted non-blocking DMA allocations

Commit Message

Peter Gonda Sept. 25, 2020, 4:19 p.m. UTC
From: David Rientjes <rientjes@google.com>

upstream 96a539fa3bb71f443ae08e57b9f63d6e5bb2207c commit.

If arch_dma_set_uncached() fails after memory has been decrypted, it needs
to be re-encrypted before freeing.

Fixes: fa7e2247c572 ("dma-direct: make uncached_kernel_address more general")
Signed-off-by: David Rientjes <rientjes@google.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Peter Gonda <pgonda@google.com>
---
 kernel/dma/direct.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

Patch

diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index e72bb0dc8150..b4a5b7076399 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -234,7 +234,7 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
 		arch_dma_prep_coherent(page, size);
 		ret = arch_dma_set_uncached(ret, size);
 		if (IS_ERR(ret))
-			goto out_free_pages;
+			goto out_encrypt_pages;
 	}
 done:
 	if (force_dma_unencrypted(dev))
@@ -242,6 +242,11 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
 	else
 		*dma_handle = phys_to_dma(dev, page_to_phys(page));
 	return ret;
+
+out_encrypt_pages:
+	if (force_dma_unencrypted(dev))
+		set_memory_encrypted((unsigned long)page_address(page),
+				     1 << get_order(size));
 out_free_pages:
 	dma_free_contiguous(dev, page, size);
 	return NULL;
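
For readers stitching the two hunks together: with this patch applied, the
error-handling tail of dma_direct_alloc_pages() reads roughly as below. This
is a sketch assembled from the hunks above; the context elided between them
(the __phys_to_dma() assignment under the done: label) and the comments are
reconstructed from the upstream source and may differ slightly in the 5.4
tree.

		/* inside the branch that builds an uncached mapping */
		arch_dma_prep_coherent(page, size);
		ret = arch_dma_set_uncached(ret, size);
		if (IS_ERR(ret))
			goto out_encrypt_pages;	/* was out_free_pages */
	}
done:
	if (force_dma_unencrypted(dev))
		*dma_handle = __phys_to_dma(dev, page_to_phys(page));
	else
		*dma_handle = phys_to_dma(dev, page_to_phys(page));
	return ret;

out_encrypt_pages:
	/*
	 * Undo the earlier set_memory_decrypted() so the pages return
	 * to the allocator with mapping attributes matching what the
	 * rest of the kernel expects.
	 */
	if (force_dma_unencrypted(dev))
		set_memory_encrypted((unsigned long)page_address(page),
				     1 << get_order(size));
out_free_pages:
	dma_free_contiguous(dev, page, size);
	return NULL;

Note the label ordering: out_encrypt_pages deliberately falls through into
out_free_pages, so the encrypted attribute is restored before
dma_free_contiguous() hands the pages back.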