Message ID: 20210611152659.2142983-1-tientzu@chromium.org
Series: Restricted DMA
On Fri, Jun 11, 2021 at 11:26:46PM +0800, Claire Chang wrote:
> +	spin_lock_init(&mem->lock);
> +	for (i = 0; i < mem->nslabs; i++) {
> +		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
> +		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
> +		mem->slots[i].alloc_size = 0;
> +	}
> +
> +	if (memory_decrypted)
> +		set_memory_decrypted((unsigned long)vaddr, bytes >> PAGE_SHIFT);
> +	memset(vaddr, 0, bytes);

We don't really need to do this call before the memset.  Which means we
can just move it to the callers that care instead of having a bool
argument.

Otherwise looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>
On Fri, Jun 11, 2021 at 11:26:47PM +0800, Claire Chang wrote:
> Split the debugfs creation to make the code reusable for supporting
> different bounce buffer pools, e.g. restricted DMA pool.
>
> Signed-off-by: Claire Chang <tientzu@chromium.org>
> ---
>  kernel/dma/swiotlb.c | 23 ++++++++++++++++-------
>  1 file changed, 16 insertions(+), 7 deletions(-)
>
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index 1a1208c81e85..8a3e2b3b246d 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -64,6 +64,9 @@
>  enum swiotlb_force swiotlb_force;
>
>  struct io_tlb_mem *io_tlb_default_mem;
> +#ifdef CONFIG_DEBUG_FS
> +static struct dentry *debugfs_dir;
> +#endif

What about moving this declaration into the main CONFIG_DEBUG_FS block
near the functions using it?

Otherwise looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>
On Fri, Jun 11, 2021 at 11:26:53PM +0800, Claire Chang wrote:
> Move the maintenance of alloc_size to find_slots for better code
> reusability later.

Looks good,

Reviewed-by: Christoph Hellwig <hch@lst.de>
I think merging this with the next two patches would be a little more clear.
On Mon, Jun 14, 2021 at 08:28:01AM +0200, Christoph Hellwig wrote:
> I think merging this with the next two patches would be a little more
> clear.

Sorry, I mean the next patch and the previous one.
On Mon, Jun 14, 2021 at 2:16 PM Christoph Hellwig <hch@lst.de> wrote:
>
> On Fri, Jun 11, 2021 at 11:26:46PM +0800, Claire Chang wrote:
> > +	spin_lock_init(&mem->lock);
> > +	for (i = 0; i < mem->nslabs; i++) {
> > +		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
> > +		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
> > +		mem->slots[i].alloc_size = 0;
> > +	}
> > +
> > +	if (memory_decrypted)
> > +		set_memory_decrypted((unsigned long)vaddr, bytes >> PAGE_SHIFT);
> > +	memset(vaddr, 0, bytes);
>
> We don't really need to do this call before the memset.  Which means we
> can just move it to the callers that care instead of having a bool
> argument.
>
> Otherwise looks good:
>
> Reviewed-by: Christoph Hellwig <hch@lst.de>

Thanks for the review. Will wait more days for other reviews and send
v10 to address the comments in this and other patches.
v10 here: https://lore.kernel.org/patchwork/cover/1446882/