Message ID: 20220219005221.634-1-bhe@redhat.com
Series: Don't use kmalloc() with GFP_DMA
On Sat, 19 Feb 2022 08:52:16 +0800 Baoquan He wrote:
> dma_pool_alloc() uses dma_alloc_coherent() to pre-allocate the DMA buffer,
> so it's redundant to specify GFP_DMA when calling it.
>
> Signed-off-by: Baoquan He <bhe@redhat.com>

This and the other two netdev patches in the series are perfectly
reasonable cleanups even outside of the larger context. Please repost
those separately and make sure you CC the maintainers of the drivers.
On Sat, Feb 19, 2022 at 08:52:00AM +0800, Baoquan He wrote:
> The gfp assignment was commented out in ancient times and, combined with
> the code comment, it obviously hasn't been needed since then. Let's remove
> the whole ifdeffery block so that a search for GFP_DMA won't point to it.

Looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>
On Sat, Feb 19, 2022 at 08:52:02AM +0800, Baoquan He wrote:
> dma_alloc_coherent() allocates a DMA buffer with the device's addressing
> limitation in mind. It's redundant to specify GFP_DMA when calling
> dma_alloc_coherent().

Looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>
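For context, the pattern these patches remove looks roughly like this; an
illustrative fragment only, not an actual hunk from the series:

	dma_addr_t handle;
	void *buf;

	/* Before: the GFP_DMA is redundant, dma_alloc_coherent() already
	 * respects the device's coherent DMA mask when allocating. */
	buf = dma_alloc_coherent(dev, size, &handle, GFP_DMA);

	/* After: plain GFP_KERNEL is enough. */
	buf = dma_alloc_coherent(dev, size, &handle, GFP_KERNEL);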
On Sat, Feb 19, 2022 at 08:52:04AM +0800, Baoquan He wrote:
> dma_alloc_coherent() allocates a DMA buffer with the device's addressing
> limitation in mind. It's redundant to specify GFP_DMA when calling
> dma_alloc_coherent().
>
> [ 42.hyeyoo@gmail.com: Update changelog ]
>
> Signed-off-by: Baoquan He <bhe@redhat.com>
> Acked-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>

Looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>
On Sat, Feb 19, 2022 at 08:52:06AM +0800, Baoquan He wrote:
> dma_alloc_coherent() allocates a DMA buffer with the device's addressing
> limitation in mind. It's redundant to specify GFP_DMA when calling
> dma_alloc_coherent().

Looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>
On Sat, Feb 19, 2022 at 08:52:08AM +0800, Baoquan He wrote:
> dma_alloc_coherent() allocates a DMA buffer with the device's addressing
> limitation in mind. It's redundant to specify GFP_DMA when calling
> dma_alloc_coherent(); replace it with GFP_KERNEL.

Please avoid the overly long line. The rest looks good.
On Sat, Feb 19, 2022 at 08:52:10AM +0800, Baoquan He wrote:
> dma_alloc_coherent() allocates a DMA buffer with the device's addressing
> limitation in mind. It's redundant to specify GFP_DMA when calling
> dma_alloc_coherent().

Looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>
On Sat, Feb 19, 2022 at 08:52:12AM +0800, Baoquan He wrote:
> dma_alloc_coherent() allocates a DMA buffer with the device's addressing
> limitation in mind. It's redundant to specify GFP_DMA when calling
> dma_alloc_coherent().

Looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>
On Sat, Feb 19, 2022 at 08:52:14AM +0800, Baoquan He wrote:
> dma_pool_alloc() uses dma_alloc_coherent() to pre-allocate the DMA buffer,
> so it's redundant to specify GFP_DMA32 when calling it.

Looks good,

Reviewed-by: Christoph Hellwig <hch@lst.de>
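The same reasoning covers the dma_pool cases; a minimal sketch, assuming a
hypothetical device `dev` (names illustrative, not from the patch):

	struct dma_pool *pool;
	dma_addr_t handle;
	void *vaddr;

	/* The pool's backing memory comes from dma_alloc_coherent(), which
	 * already honours the device's addressing limits ... */
	pool = dma_pool_create("example", dev, 256, 16, 0);
	if (!pool)
		return -ENOMEM;

	/* ... so passing GFP_DMA or GFP_DMA32 here is redundant. */
	vaddr = dma_pool_alloc(pool, GFP_KERNEL, &handle);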
On Sat, Feb 19, 2022 at 08:52:16AM +0800, Baoquan He wrote:
> dma_pool_alloc() uses dma_alloc_coherent() to pre-allocate the DMA buffer,
> so it's redundant to specify GFP_DMA when calling it.

Looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>
On Sat, Feb 19, 2022 at 08:52:18AM +0800, Baoquan He wrote:
> Use dma_alloc_noncoherent() instead to get the DMA buffer.
>
> [ 42.hyeyoo@gmail.com: Use dma_alloc_noncoherent() instead of
> __get_free_pages.

Looks good,

Reviewed-by: Christoph Hellwig <hch@lst.de>
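A hedged sketch of the replacement pattern described in that bracketed note
(variable names are illustrative):

	dma_addr_t handle;
	void *buf;

	/* Unlike __get_free_pages(), this returns memory the device is
	 * guaranteed to be able to address; ownership transfers are done
	 * with the dma_sync_*() helpers. */
	buf = dma_alloc_noncoherent(dev, size, &handle, DMA_BIDIRECTIONAL,
				    GFP_KERNEL);
	if (!buf)
		return -ENOMEM;
	/* ... use the buffer ... */
	dma_free_noncoherent(dev, size, buf, handle, DMA_BIDIRECTIONAL);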
On Sat, Feb 19, 2022 at 08:52:20AM +0800, Baoquan He wrote:
> 	if (request_dma(dma, DRIVER_NAME))
> 		goto err;
>
> +	dma_set_mask_and_coherent(mmc_dev(host->mmc), DMA_BIT_MASK(24));

This also sets the streaming mask, but the driver doesn't seem to make
use of that. Please document it in the commit log.

Also, setting masks smaller than 32 bits can fail, so this should have
error handling.
On 02/19/22 at 07:51am, Wolfram Sang wrote:
> > > --- a/drivers/staging/media/imx/imx-media-utils.c
>
> $subject says 'emxx_udc' instead of 'imx: media-utils'.

Ah, good catch. It was wrongly copied from patch 12; I will fix it,
thanks.
On 02/19/22 at 08:17am, Christoph Hellwig wrote:
> On Sat, Feb 19, 2022 at 08:52:20AM +0800, Baoquan He wrote:
> > 	if (request_dma(dma, DRIVER_NAME))
> > 		goto err;
> >
> > +	dma_set_mask_and_coherent(mmc_dev(host->mmc), DMA_BIT_MASK(24));
>
> This also sets the streaming mask, but the driver doesn't seem to make
> use of that. Please document it in the commit log.

Thanks for reviewing. I will change it to dma_set_mask() and describe
this change in the patch log.

> Also, setting masks smaller than 32 bits can fail, so this should have
> error handling.

OK, will check and add error handling.
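A minimal sketch of what the reworked hunk might look like, assuming the
24-bit limit stays (illustrative only; the actual respin may differ):

	if (request_dma(dma, DRIVER_NAME))
		goto err;

	/* Masks smaller than 32 bits can fail, so check the result. */
	if (dma_set_mask(mmc_dev(host->mmc), DMA_BIT_MASK(24)))
		goto err;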
On Tue, Feb 22, 2022 at 09:44:22AM +0100, Christoph Hellwig wrote:
> On Mon, Feb 21, 2022 at 02:57:34PM +0100, Heiko Carstens wrote:
> > > 1) kmalloc(GFP_DMA) in the s390 platform, under arch/s390 and drivers/s390;
> >
> > So, s390 partially requires GFP_DMA allocations for memory areas which
> > are required by the hardware to be below 2GB. There is not necessarily
> > a device associated with them when this is required. E.g. some legacy
> > "diagnose" calls require buffers to be below 2GB.
> >
> > How should something like this be handled? I'd guess that the
> > dma_alloc API is not the right thing to use in such cases. Of course
> > we could say, let's waste memory and use full pages instead; however,
> > I'm not sure this is a good idea.
>
> Yeah, I don't think the DMA API is the right thing for that. This
> is one of the very rare cases where a raw allocation makes sense.
>
> That being said, being able to drop kmalloc support for GFP_DMA would
> be really useful. How much memory would we waste if we switched to the
> page allocator?

At a first glance this would not waste much memory, since most callers
seem to allocate such memory pieces only temporarily.

> > The question is: what would this buy us? As stated above, I'd assume
> > this comes with quite some code churn, so there should be a good
> > reason to do it.
>
> There are two steps here. One is to remove GFP_DMA support from
> kmalloc, which would help to clean up the slab allocator(s) very nicely,
> as at that point it can stop being zone aware entirely.

Well, looking at slub.c it looks like there is only a very minimal
maintenance burden for GFP_DMA/GFP_DMA32 support.

> The long term goal is to remove ZONE_DMA entirely, at least for
> architectures that only use the small 16MB ISA-style one. It can
> then be replaced with, for example, a CMA area and fall into a movable
> zone. I'd have to prototype this first and see how it applies to the
> s390 case. It might not be worth it, and maybe we should replace
> ZONE_DMA and ZONE_DMA32 with a ZONE_LIMITED for those use cases, as
> the amount covered tends not to be totally out of line with what we
> built the zone infrastructure for.

So probably I'm missing something, but for small systems where we
would only have ZONE_DMA, how would a CMA area within this zone
improve things? If I'm not mistaken, the page allocator will not fall
back to any CMA area for GFP_KERNEL allocations. That is: we would
somehow need to find "the right size" for the CMA area, depending on
memory size. This looks like a new problem class which currently does
not exist. Besides that, we would also no longer have all the debugging
options provided by the slab allocator.

Anyway, maybe it would make more sense if you sent your patch; then we
can see where we would end up.
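As a rough illustration of the page-allocator alternative being weighed
here, a sketch assuming a short-lived buffer that the hardware requires
below 2GB:

	/* Full page from the page allocator instead of kmalloc(GFP_DMA);
	 * wastes up to a page, which matters little for a buffer that is
	 * freed again right after the call. */
	void *buf = (void *)__get_free_page(GFP_DMA);

	if (!buf)
		return -ENOMEM;
	/* ... issue the diagnose call using buf ... */
	free_page((unsigned long)buf);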
On Wed, Feb 23, 2022 at 08:18:08PM +0100, Heiko Carstens wrote:
> > The long term goal is to remove ZONE_DMA entirely, at least for
> > architectures that only use the small 16MB ISA-style one. It can
> > then be replaced with, for example, a CMA area and fall into a movable
> > zone. I'd have to prototype this first and see how it applies to the
> > s390 case. It might not be worth it, and maybe we should replace
> > ZONE_DMA and ZONE_DMA32 with a ZONE_LIMITED for those use cases, as
> > the amount covered tends not to be totally out of line with what we
> > built the zone infrastructure for.
>
> So probably I'm missing something, but for small systems where we
> would only have ZONE_DMA, how would a CMA area within this zone
> improve things?

It would not, but more importantly we would not need it at all. The
thinking here is really about the nasty 16MB ISA-style ZONE_DMA; the
31-bit one is something rather different.
On 02/23/22 at 03:25pm, Christoph Hellwig wrote:
> On Wed, Feb 23, 2022 at 08:28:13AM +0800, Baoquan He wrote:
> > Could you tell me more about why this is wrong? According to
> > Documentation/core-api/dma-api.rst and the DMA code, __dma_alloc_pages()
> > is the core function behind dma_alloc_pages()/dma_alloc_noncoherent(),
> > which are obviously streaming mappings,
>
> Why are they "obviously" streaming mappings?

Because they are obviously not coherent mappings? With my understanding,
there are two kinds of DMA mapping: coherent mapping (which is also
persistent mapping) and streaming mapping. The coherent mapping is handled
during driver init and released during driver de-init, while streaming
mapping is done whenever needed and released after usage. Are we going to
add another kind of mapping, one that is not a streaming mapping but uses
dev->coherent_dma_mask just because it uses the dma_alloc_xxx() API?

> > why do we need to check
> > dev->coherent_dma_mask here? Because dev->coherent_dma_mask is a subset
> > of dev->dma_mask, it's safer to use dev->coherent_dma_mask in these
> > places? This is confusing. I talked to Hyeonggon in private mail; he
> > has the same feeling.
>
> Think of the coherent_dma_mask as a dma_alloc_mask. It is the mask for
> the DMA memory allocator. dma_mask is the mask for the dma_map_*
> routines.

I will check the code further. This may need to be noted in the docs,
e.g. dma-api.rst or dma-api-howto.rst. If you can give some guidance, I
can try to add some words to make this clear, or leave it to people who
know this area well. I believe it would be very helpful for understanding
the DMA API.
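A small sketch of the distinction Christoph draws, with illustrative names
(not taken from any of the patches):

	dma_addr_t handle, addr;
	void *buf;

	/* Usually both masks are set together; the call can fail. */
	if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32)))
		return -EIO;

	/* Allocator path: checked against dev->coherent_dma_mask. */
	buf = dma_alloc_coherent(dev, size, &handle, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;

	/* Streaming path: checked against dev->dma_mask. */
	addr = dma_map_single(dev, ptr, size, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, addr)) {
		dma_free_coherent(dev, size, buf, handle);
		return -ENOMEM;
	}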
From: Baoquan He
> Sent: 24 February 2022 14:11
...
> With my understanding, there are two kinds of DMA mapping: coherent
> mapping (which is also persistent mapping) and streaming mapping. The
> coherent mapping is handled during driver init and released during
> driver de-init, while streaming mapping is done whenever needed and
> released after usage.

The lifetime has absolutely nothing to do with it.

It is all about how the DMA cycles (from the device) interact with
(or, more to the point, don't interact with) the CPU memory cache.

For a coherent mapping the CPU and the device can write to (different)
words in the same cache line at the same time, and both will see both
updates. On some systems this can only be achieved by making the memory
uncached, which significantly slows down CPU access.

For a non-coherent (streaming) mapping the CPU writes back and/or
invalidates the data cache so that the DMA read cycles from memory read
the correct data, and the CPU re-reads the cache line after the DMA has
completed. Streaming mappings are only really suitable for data buffers.

	David
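To make the streaming side of David's description concrete, a hedged
sketch of a receive buffer (names illustrative):

	/* Ownership passes to the device at map time and back to the CPU
	 * at unmap time; the DMA core does whatever cache writeback or
	 * invalidation each handover needs. */
	dma_addr_t a = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);

	if (dma_mapping_error(dev, a))
		return -ENOMEM;
	/* ... device DMAs into buf ... */
	dma_unmap_single(dev, a, len, DMA_FROM_DEVICE);
	/* The CPU can now read buf and see the device's writes. */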
On 02/24/22 at 02:27pm, David Laight wrote:
> From: Baoquan He
> > Sent: 24 February 2022 14:11
> ...
> > With my understanding, there are two kinds of DMA mapping: coherent
> > mapping (which is also persistent mapping) and streaming mapping. The
> > coherent mapping is handled during driver init and released during
> > driver de-init, while streaming mapping is done whenever needed and
> > released after usage.
>
> The lifetime has absolutely nothing to do with it.
>
> It is all about how the DMA cycles (from the device) interact with
> (or, more to the point, don't interact with) the CPU memory cache.
>
> For a coherent mapping the CPU and the device can write to (different)
> words in the same cache line at the same time, and both will see both
> updates. On some systems this can only be achieved by making the memory
> uncached, which significantly slows down CPU access.
>
> For a non-coherent (streaming) mapping the CPU writes back and/or
> invalidates the data cache so that the DMA read cycles from memory read
> the correct data, and the CPU re-reads the cache line after the DMA has
> completed. Streaming mappings are only really suitable for data buffers.

Thanks for the valuable input. I agree that lifetime is not something we
can rely on to make the distinction. But then how do we explain that
dma_alloc_noncoherent() is not a streaming mapping? Which kind of DMA
mapping is it? I may be missing something important here that is obvious
to other people; I will make time to check.