
[RESEND,v2,50/53] mtd: nand: allocate aligned buffers if NAND_OWN_BUFFERS is unset

Message ID 1490228282-10805-24-git-send-email-yamada.masahiro@socionext.com
State Superseded

Commit Message

Masahiro Yamada March 23, 2017, 12:17 a.m. UTC
Commit 66507c7bc889 ("mtd: nand: Add support to use nand_base poi
databuf as bounce buffer") fixed the issue that drivers can be
passed a kmap()'d buffer.  This addressed the use_bufpoi = 0 case.

When use_bufpoi = 1, chip->buffers->databuf is used.  The databuf
allocated by nand_scan_tail() is not suitable for DMA due to the
offset of sizeof(*chip->buffers) from the allocated address.
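
For reference, this is roughly the current allocation in nand_scan_tail()
(the lines this patch removes): the struct and all buffers share one
kzalloc'ed region, so none of the buffers is guaranteed to start on a
cache line boundary.

	nbuf = kzalloc(sizeof(*nbuf) + mtd->writesize
			+ mtd->oobsize * 3, GFP_KERNEL);
	/* the buffers start right after the struct, i.e. at an offset of
	 * sizeof(*nbuf) from the cache-line-aligned kzalloc'ed address */
	nbuf->ecccalc = (uint8_t *)(nbuf + 1);
	nbuf->ecccode = nbuf->ecccalc + mtd->oobsize;
	nbuf->databuf = nbuf->ecccode + mtd->oobsize;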

So, drivers using DMA are very likely to end up setting the
NAND_OWN_BUFFERS flag and allocating buffers by themselves.  Drivers
tend to use devm_k*alloc() to simplify the probe failure path, but
devm_k*alloc() does not return a cache line aligned buffer.

If used, it could violate the DMA-API requirement stated in
Documentation/DMA-API.txt:
  Warnings:  Memory coherency operates at a granularity called the
  cache line width.  In order for memory mapped by this API to
  operate correctly, the mapped region must begin exactly on a cache
  line boundary and end exactly on one (to prevent two separately
  mapped regions from sharing a single cache line).

To sum up, NAND_OWN_BUFFERS is not very convenient for drivers.
Instead, nand_scan_tail() can allocate a separate buffer for each of
ecccalc, ecccode, and databuf.  kmalloc'ed memory is guaranteed to be
aligned to ARCH_DMA_MINALIGN.  This is enough for most drivers
because DMA engines rarely require larger alignment than the CPU's
cache line.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>

---

Changes in v2:
  - Newly added

 drivers/mtd/nand/nand_base.c | 34 +++++++++++++++++++++++++++-------
 1 file changed, 27 insertions(+), 7 deletions(-)

-- 
2.7.4



Comments

Boris Brezillon March 27, 2017, 8 a.m. UTC | #1
Hi Masahiro,

On Thu, 23 Mar 2017 09:17:59 +0900
Masahiro Yamada <yamada.masahiro@socionext.com> wrote:

> Commit 66507c7bc889 ("mtd: nand: Add support to use nand_base poi

> databuf as bounce buffer") fixed the issue that drivers can be

> passed with a kmap()'d buffer.  This addressed the use_bufpoi = 0

> case.

> 

> When use_bufpoi = 1, chip->buffers->databuf is used.  The databuf

> allocated by nand_scan_tail() is not suitable for DMA due to the

> offset, sizeof(*chip->buffers).


As said earlier, I'm fine with the patch content, but I'm not sure
about your explanation. Alignment requirements are DMA controller
dependent so you're not exactly fixing a bug for all NAND controller
drivers, just those that require >4 bytes alignment.

Regarding the cache line alignment issue, in this case it shouldn't be
a problem, because the driver and the core shouldn't touch the
chip->buffers object during a DMA transfer, so dma_map/unmap_single()
should work fine (they may flush/invalidate one cache line entry that
contains non-payload data, but that's not a problem as long as nothing
is modified during the DMA transfer).
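
For the record, a minimal sketch of the pattern under discussion
(illustrative only; the "nandx_" name and the device pointer are
assumptions, and the usual dma-mapping/mtd/nand headers are implied):

	/* Map the core-provided page buffer for a controller DMA read.
	 * The cache line partially shared with the nand_buffers struct is
	 * harmless as long as chip->buffers is not written between map
	 * and unmap. */
	static int nandx_dma_read_page(struct device *dev, struct mtd_info *mtd,
				       struct nand_chip *chip)
	{
		dma_addr_t dma_addr;

		dma_addr = dma_map_single(dev, chip->buffers->databuf,
					  mtd->writesize, DMA_FROM_DEVICE);
		if (dma_mapping_error(dev, dma_addr))
			return -EIO;

		/* ... start the controller's DMA and wait for completion ... */

		dma_unmap_single(dev, dma_addr, mtd->writesize, DMA_FROM_DEVICE);
		return 0;
	}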

> 

> So, drivers using DMA are very likely to end up with setting the

> NAND_OWN_BUFFERS flag and allocate buffers by themselves.  Drivers

> tend to use devm_k*alloc to simplify the probe failure path, but

> devm_k*alloc() does not return a cache line aligned buffer.


Oh, I didn't know that. I had a closer look at the code, and indeed,
devm_kmalloc() does not guarantee any alignment.

> 

> If used, it could violate the DMA-API requirement stated in

> Documentation/DMA-API.txt:

>   Warnings:  Memory coherency operates at a granularity called the

>   cache line width.  In order for memory mapped by this API to

>   operate correctly, the mapped region must begin exactly on a cache

>   line boundary and end exactly on one (to prevent two separately

>   mapped regions from sharing a single cache line).

> 

> To sum up, NAND_OWN_BUFFERS is not very convenient for drivers.

> nand_scan_tail() can give a separate buffer for each of ecccalc,

> ecccode, databuf.  It is guaranteed kmalloc'ed memory is aligned

> with ARCH_DMA_MINALIGN.


Maybe I'm wrong, but AFAIU, kmalloc&co do not guarantee cache line
alignment for small buffers (< cache line), so even kmalloc can return
a buffer that is not cache line aligned.
This being said, we should be good because most NAND controllers are
only manipulating the page buffer (->databuf) which is large enough to
be cache line aligned.

Anyway, I'm not discussing the need for this patch, just the reasons we
need it ;-).

To me, it's more that we want to support as many cases as possible, no
matter the DMA controller requirements, and allocating each buffer
independently allows us to do that for almost no overhead.

How about simplifying the commit message to only focus on what this
patch is really fixing/improving?

"
Some NAND controllers are using DMA engine requiring a specific buffer
alignment. The core provides no guarantee on the nand_buffers pointers,
which forces some drivers to allocate their own buffers and pass the
NAND_OWN_BUFFERS flag.

Rework the nand_buffers allocation logic to allocate each buffer
independently. This should make most NAND controllers/DMA engine
happy, and allow us to get rid of these custom buf allocation in NAND
controller drivers.
"

> This is enough for most drivers because

> it is rare that DMA engines require larger alignment than CPU's

> cache line.

> 

> Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>

> ---

> 

> Changes in v2:

>   - Newly added

> 

>  drivers/mtd/nand/nand_base.c | 34 +++++++++++++++++++++++++++-------

>  1 file changed, 27 insertions(+), 7 deletions(-)

> 

> diff --git a/drivers/mtd/nand/nand_base.c b/drivers/mtd/nand/nand_base.c

> index e13f959..6fc0422 100644

> --- a/drivers/mtd/nand/nand_base.c

> +++ b/drivers/mtd/nand/nand_base.c

> @@ -4614,13 +4614,25 @@ int nand_scan_tail(struct mtd_info *mtd)

>  	}

>  

>  	if (!(chip->options & NAND_OWN_BUFFERS)) {

> -		nbuf = kzalloc(sizeof(*nbuf) + mtd->writesize

> -				+ mtd->oobsize * 3, GFP_KERNEL);

> +		nbuf = kzalloc(sizeof(*nbuf), GFP_KERNEL);

>  		if (!nbuf)

>  			return -ENOMEM;

> -		nbuf->ecccalc = (uint8_t *)(nbuf + 1);

> -		nbuf->ecccode = nbuf->ecccalc + mtd->oobsize;

> -		nbuf->databuf = nbuf->ecccode + mtd->oobsize;

> +		nbuf->ecccalc = kmalloc(mtd->oobsize, GFP_KERNEL);

> +		if (!nbuf->ecccalc) {

> +			ret = -EINVAL;

> +			goto err_free;

> +		}

> +		nbuf->ecccode = kmalloc(mtd->oobsize, GFP_KERNEL);

> +		if (!nbuf->ecccode) {

> +			ret = -EINVAL;

> +			goto err_free;

> +		}

> +		nbuf->databuf = kmalloc(mtd->writesize + mtd->oobsize,

> +					GFP_KERNEL);

> +		if (!nbuf->databuf) {

> +			ret = -EINVAL;

> +			goto err_free;

> +		}

>  

>  		chip->buffers = nbuf;

>  	} else {

> @@ -4863,8 +4875,12 @@ int nand_scan_tail(struct mtd_info *mtd)

>  	/* Build bad block table */

>  	return chip->scan_bbt(mtd);

>  err_free:

> -	if (!(chip->options & NAND_OWN_BUFFERS))

> +	if (!(chip->options & NAND_OWN_BUFFERS)) {

> +		kfree(chip->buffers->databuf);

> +		kfree(chip->buffers->ecccode);

> +		kfree(chip->buffers->ecccalc);

>  		kfree(chip->buffers);

> +	}

>  	return ret;

>  }

>  EXPORT_SYMBOL(nand_scan_tail);

> @@ -4915,8 +4931,12 @@ void nand_cleanup(struct nand_chip *chip)

>  

>  	/* Free bad block table memory */

>  	kfree(chip->bbt);

> -	if (!(chip->options & NAND_OWN_BUFFERS))

> +	if (!(chip->options & NAND_OWN_BUFFERS)) {

> +		kfree(chip->buffers->databuf);

> +		kfree(chip->buffers->ecccode);

> +		kfree(chip->buffers->ecccalc);

>  		kfree(chip->buffers);

> +	}

>  

>  	/* Free bad block descriptor memory */

>  	if (chip->badblock_pattern && chip->badblock_pattern->options



Masahiro Yamada March 28, 2017, 1:13 a.m. UTC | #2
Hi Boris,


> -----Original Message-----

> From: Boris Brezillon [mailto:boris.brezillon@free-electrons.com]

> Sent: Monday, March 27, 2017 5:01 PM

> To: Yamada, Masahiro/山田 真弘 <yamada.masahiro@socionext.com>

> Cc: linux-mtd@lists.infradead.org; David Woodhouse

> <dwmw2@infradead.org>; Marek Vasut <marek.vasut@gmail.com>; Brian Norris

> <computersforpeace@gmail.com>; thorsten.christiansson@idquantique.com;

> laurent.monat@idquantique.com; Dinh Nguyen <dinguyen@kernel.org>; Artem

> Bityutskiy <artem.bityutskiy@linux.intel.com>; Graham Moore

> <grmoore@opensource.altera.com>; Enrico Jorns <ejo@pengutronix.de>;

> Chuanxiao Dong <chuanxiao.dong@intel.com>; Masami Hiramatsu

> <mhiramat@kernel.org>; Jassi Brar <jaswinder.singh@linaro.org>; Rob

> Herring <robh@kernel.org>

> Subject: Re: [RESEND PATCH v2 50/53] mtd: nand: allocate aligned buffers

> if NAND_OWN_BUFFERS is unset

> 

> Hi Masahiro,

> 

> On Thu, 23 Mar 2017 09:17:59 +0900

> Masahiro Yamada <yamada.masahiro@socionext.com> wrote:

> 

> > Commit 66507c7bc889 ("mtd: nand: Add support to use nand_base poi

> > databuf as bounce buffer") fixed the issue that drivers can be

> > passed with a kmap()'d buffer.  This addressed the use_bufpoi = 0

> > case.

> >

> > When use_bufpoi = 1, chip->buffers->databuf is used.  The databuf

> > allocated by nand_scan_tail() is not suitable for DMA due to the

> > offset, sizeof(*chip->buffers).

> 

> As said earlier, I'm fine with the patch content, but I'm not sure

> about your explanation. Alignment requirements are DMA controller

> dependent so you're not exactly fixing a bug for all NAND controller

> drivers, just those that require >4 bytes alignment.



We have two contexts when we talk about alignment for DMA:

[A] Requirement by CPU architecture  (cache alignment)
[B] Requirement by the controller's DMA engine


As I will state below, having sizeof(*chip->buffers) in the same cache
line is no good.  This is the same reason as devm_* is not recommended for DMA.
(https://lkml.org/lkml/2017/3/8/693)


The current situation violates [A].

Usually [B] is less than [A].
So, if we meet [A] (by using kmalloc), [B] will be met as well.
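
To illustrate with a hypothetical driver-side check (not part of this
patch; the 4-byte value is just an example of a [B]-type requirement):

	/* kmalloc'ed buffers already satisfy ARCH_DMA_MINALIGN ([A]),
	 * which is normally stricter than the DMA engine's requirement
	 * ([B]), so a check like this should never trigger. */
	if (!IS_ALIGNED((unsigned long)chip->buffers->databuf, 4))
		dev_warn(dev, "databuf not 4-byte aligned, falling back to PIO\n");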



> Regarding the cache line alignment issue, in this case it shouldn't be

> a problem, because the driver and the core shouldn't touch the

> chip->buffers object during a DMA transfer, so dma_map/unmap_single()

> should work fine (they may flush/invalidate one cache line entry that

> contains non-payload data, but that's not a problem as long as nothing

> is modified during the DMA transfer).



This is related to 52/53:
http://patchwork.ozlabs.org/patch/742409/

Can you also read this?
https://lkml.org/lkml/2017/3/8/693

Your comment is very similar to what was discussed in the thread.



> >

> > So, drivers using DMA are very likely to end up with setting the

> > NAND_OWN_BUFFERS flag and allocate buffers by themselves.  Drivers

> > tend to use devm_k*alloc to simplify the probe failure path, but

> > devm_k*alloc() does not return a cache line aligned buffer.

> 

> Oh, I didn't know that. I had a closer look at the code, and indeed,

> devm_kmalloc() does not guarantee any alignment.

> 

> >

> > If used, it could violate the DMA-API requirement stated in

> > Documentation/DMA-API.txt:

> >   Warnings:  Memory coherency operates at a granularity called the

> >   cache line width.  In order for memory mapped by this API to

> >   operate correctly, the mapped region must begin exactly on a cache

> >   line boundary and end exactly on one (to prevent two separately

> >   mapped regions from sharing a single cache line).

> >

> > To sum up, NAND_OWN_BUFFERS is not very convenient for drivers.

> > nand_scan_tail() can give a separate buffer for each of ecccalc,

> > ecccode, databuf.  It is guaranteed kmalloc'ed memory is aligned

> > with ARCH_DMA_MINALIGN.

> 

> Maybe I'm wrong, but AFAIU, kmalloc&co do not guarantee cache line

> alignment for small buffers (< cache line), so even kmalloc can return

> a buffer that is not cache line aligned.

> This being said, we should be good because most NAND controllers are

> only manipulating the page buffer (->databuf) which is large enough to

> be cache line aligned.



In my understanding kmalloc() returns cache aligned address even for 1 byte memory.

Can you read the following part of Documentation/DMA-API-HOWTO.txt?

------------------------------------>8----------------------------------------
2) ARCH_DMA_MINALIGN

   Architectures must ensure that kmalloc'ed buffer is
   DMA-safe. Drivers and subsystems depend on it. If an architecture
   isn't fully DMA-coherent (i.e. hardware doesn't ensure that data in
   the CPU cache is identical to data in main memory),
   ARCH_DMA_MINALIGN must be set so that the memory allocator
   makes sure that kmalloc'ed buffer doesn't share a cache line with
   the others. See arch/arm/include/asm/cache.h as an example.

   Note that ARCH_DMA_MINALIGN is about DMA memory alignment
   constraints. You don't need to worry about the architecture data
   alignment constraints (e.g. the alignment constraints about 64-bit
   objects).
------------------------------------>8----------------------------------------






> Anyway, I'm not discussing the need for this patch, just the reasons we

> need it ;-).

> 

> To me, it's more that we want to support as many cases as possible, no

> matter the DMA controller requirements, and allocating each buffer

> independently allows us to do that for almost no overhead.

> 

> How about simplifying the commit message to only focus on what this

> patch is really fixing/improving?



I am OK with rewording the git-log,
but can you read the references I suggested first?

Please let me know if you suspect something is wrong.

Masahiro





> "

> Some NAND controllers are using DMA engine requiring a specific buffer

> alignment. The core provides no guarantee on the nand_buffers pointers,

> which forces some drivers to allocate their own buffers and pass the

> NAND_OWN_BUFFERS flag.

> 

> Rework the nand_buffers allocation logic to allocate each buffer

> independently. This should make most NAND controllers/DMA engine

> happy, and allow us to get rid of these custom buf allocation in NAND

> controller drivers.

> "

> 

> > This is enough for most drivers because

> > it is rare that DMA engines require larger alignment than CPU's

> > cache line.

> >

> > Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>

> > ---

> >

> > Changes in v2:

> >   - Newly added

> >

> >  drivers/mtd/nand/nand_base.c | 34 +++++++++++++++++++++++++++-------

> >  1 file changed, 27 insertions(+), 7 deletions(-)

> >

> > diff --git a/drivers/mtd/nand/nand_base.c

> b/drivers/mtd/nand/nand_base.c

> > index e13f959..6fc0422 100644

> > --- a/drivers/mtd/nand/nand_base.c

> > +++ b/drivers/mtd/nand/nand_base.c

> > @@ -4614,13 +4614,25 @@ int nand_scan_tail(struct mtd_info *mtd)

> >  	}

> >

> >  	if (!(chip->options & NAND_OWN_BUFFERS)) {

> > -		nbuf = kzalloc(sizeof(*nbuf) + mtd->writesize

> > -				+ mtd->oobsize * 3, GFP_KERNEL);

> > +		nbuf = kzalloc(sizeof(*nbuf), GFP_KERNEL);

> >  		if (!nbuf)

> >  			return -ENOMEM;

> > -		nbuf->ecccalc = (uint8_t *)(nbuf + 1);

> > -		nbuf->ecccode = nbuf->ecccalc + mtd->oobsize;

> > -		nbuf->databuf = nbuf->ecccode + mtd->oobsize;

> > +		nbuf->ecccalc = kmalloc(mtd->oobsize, GFP_KERNEL);

> > +		if (!nbuf->ecccalc) {

> > +			ret = -EINVAL;

> > +			goto err_free;

> > +		}

> > +		nbuf->ecccode = kmalloc(mtd->oobsize, GFP_KERNEL);

> > +		if (!nbuf->ecccode) {

> > +			ret = -EINVAL;

> > +			goto err_free;

> > +		}

> > +		nbuf->databuf = kmalloc(mtd->writesize + mtd->oobsize,

> > +					GFP_KERNEL);

> > +		if (!nbuf->databuf) {

> > +			ret = -EINVAL;

> > +			goto err_free;

> > +		}

> >

> >  		chip->buffers = nbuf;

> >  	} else {

> > @@ -4863,8 +4875,12 @@ int nand_scan_tail(struct mtd_info *mtd)

> >  	/* Build bad block table */

> >  	return chip->scan_bbt(mtd);

> >  err_free:

> > -	if (!(chip->options & NAND_OWN_BUFFERS))

> > +	if (!(chip->options & NAND_OWN_BUFFERS)) {

> > +		kfree(chip->buffers->databuf);

> > +		kfree(chip->buffers->ecccode);

> > +		kfree(chip->buffers->ecccalc);

> >  		kfree(chip->buffers);

> > +	}

> >  	return ret;

> >  }

> >  EXPORT_SYMBOL(nand_scan_tail);

> > @@ -4915,8 +4931,12 @@ void nand_cleanup(struct nand_chip *chip)

> >

> >  	/* Free bad block table memory */

> >  	kfree(chip->bbt);

> > -	if (!(chip->options & NAND_OWN_BUFFERS))

> > +	if (!(chip->options & NAND_OWN_BUFFERS)) {

> > +		kfree(chip->buffers->databuf);

> > +		kfree(chip->buffers->ecccode);

> > +		kfree(chip->buffers->ecccalc);

> >  		kfree(chip->buffers);

> > +	}

> >

> >  	/* Free bad block descriptor memory */

> >  	if (chip->badblock_pattern && chip->badblock_pattern->options


Boris Brezillon March 28, 2017, 7:59 a.m. UTC | #3
+Russell to correct me if I'm wrong or give further information.

On Tue, 28 Mar 2017 01:13:10 +0000
<yamada.masahiro@socionext.com> wrote:

> Hi Boris,
> 
> 
> > -----Original Message-----
> > From: Boris Brezillon [mailto:boris.brezillon@free-electrons.com]
> > Sent: Monday, March 27, 2017 5:01 PM
> > To: Yamada, Masahiro/山田 真弘 <yamada.masahiro@socionext.com>
> > Cc: linux-mtd@lists.infradead.org; David Woodhouse
> > <dwmw2@infradead.org>; Marek Vasut <marek.vasut@gmail.com>; Brian Norris
> > <computersforpeace@gmail.com>; thorsten.christiansson@idquantique.com;
> > laurent.monat@idquantique.com; Dinh Nguyen <dinguyen@kernel.org>; Artem
> > Bityutskiy <artem.bityutskiy@linux.intel.com>; Graham Moore
> > <grmoore@opensource.altera.com>; Enrico Jorns <ejo@pengutronix.de>;
> > Chuanxiao Dong <chuanxiao.dong@intel.com>; Masami Hiramatsu
> > <mhiramat@kernel.org>; Jassi Brar <jaswinder.singh@linaro.org>; Rob
> > Herring <robh@kernel.org>
> > Subject: Re: [RESEND PATCH v2 50/53] mtd: nand: allocate aligned buffers
> > if NAND_OWN_BUFFERS is unset
> > 
> > Hi Masahiro,
> > 
> > On Thu, 23 Mar 2017 09:17:59 +0900
> > Masahiro Yamada <yamada.masahiro@socionext.com> wrote:
> >   
> > > Commit 66507c7bc889 ("mtd: nand: Add support to use nand_base poi
> > > databuf as bounce buffer") fixed the issue that drivers can be
> > > passed with a kmap()'d buffer.  This addressed the use_bufpoi = 0
> > > case.
> > >
> > > When use_bufpoi = 1, chip->buffers->databuf is used.  The databuf
> > > allocated by nand_scan_tail() is not suitable for DMA due to the
> > > offset, sizeof(*chip->buffers).  
> > 
> > As said earlier, I'm fine with the patch content, but I'm not sure
> > about your explanation. Alignment requirements are DMA controller
> > dependent so you're not exactly fixing a bug for all NAND controller
> > drivers, just those that require >4 bytes alignment.  
> 
> 
> We have two contexts when we talk about alignment for DMA:
> 
> [A] Requirement by CPU architecture  (cache alignment)
> [B] Requirement by the controller's DMA engine
> 
> 
> As I will state below, having sizeof(*chip->buffers) in the same cache
> line is no good.  This is the same reason as devm_* is not recommended for DMA.
> (https://lkml.org/lkml/2017/3/8/693)

Having non-cache line aligned buffers is definitely more dangerous,
but, AFAIU, it's not impossible.

Let's consider this case:


|        cache line        |        cache line        | ... |
-------------------------------------------------------------
| nand_buffers size |                    data               |


If you call dma_map_single(dev, data, size, DMA_TO_DEVICE), the first
cache line will be flushed (content written back to memory), and
assuming you don't touch nand_buffers content between dma_map_single()
and dma_unmap_single() you shouldn't have any problem (the cache line
cannot magically turn dirty and thus cannot be flushed in the
background).

For the DMA_FROM_DEVICE direction, the cache line is invalidated when
dma_unmap_single() is called, which means your nand_buffers content
might be updated with what is present in SDRAM, but it shouldn't have
changed since nand_buffers is only touched at initialization time (when
the buffer is created).

So, for our use case where nand_buffers is never modified between
dma_map_single() and dma_unmap_single(), it should be safe to have
non-cache line aligned buffers.

Russell, please let me know if my reasoning is incorrect.
Note that I'm not arguing against the new approach where we allocate
each buffer independently using kmalloc (having cache line aligned
buffers is definitely safer and does not imply any significant
overhead), I just want to make sure I understand things correctly.

> 
> 
> The current situation violates [A].

Do you have a real failure that is proven to be caused by mis cache
line alignment, or are you just speculating?

> 
> Usually [B] is less than [A].

Yep, it's likely the case.

> So, if we meet [A] (by using kmalloc), [B] will be met as well.

Sure.

> 
> 
> 
> > Regarding the cache line alignment issue, in this case it shouldn't be
> > a problem, because the driver and the core shouldn't touch the
> > chip->buffers object during a DMA transfer, so dma_map/unmap_single()
> > should work fine (they may flush/invalidate one cache line entry that
> > contains non-payload data, but that's not a problem as long as nothing
> > is modified during the DMA transfer).  
> 
> 
> This is related to 52/53:
> http://patchwork.ozlabs.org/patch/742409/
> 
> Can you also read this?
> https://lkml.org/lkml/2017/3/8/693
> 
> Your comment is very similar to what was discussed in the thread.

I read it.

> 
> 
> 
> > >
> > > So, drivers using DMA are very likely to end up with setting the
> > > NAND_OWN_BUFFERS flag and allocate buffers by themselves.  Drivers
> > > tend to use devm_k*alloc to simplify the probe failure path, but
> > > devm_k*alloc() does not return a cache line aligned buffer.  
> > 
> > Oh, I didn't know that. I had a closer look at the code, and indeed,
> > devm_kmalloc() does not guarantee any alignment.
> >   
> > >
> > > If used, it could violate the DMA-API requirement stated in
> > > Documentation/DMA-API.txt:
> > >   Warnings:  Memory coherency operates at a granularity called the
> > >   cache line width.  In order for memory mapped by this API to
> > >   operate correctly, the mapped region must begin exactly on a cache
> > >   line boundary and end exactly on one (to prevent two separately
> > >   mapped regions from sharing a single cache line).
> > >
> > > To sum up, NAND_OWN_BUFFERS is not very convenient for drivers.
> > > nand_scan_tail() can give a separate buffer for each of ecccalc,
> > > ecccode, databuf.  It is guaranteed kmalloc'ed memory is aligned
> > > with ARCH_DMA_MINALIGN.  
> > 
> > Maybe I'm wrong, but AFAIU, kmalloc&co do not guarantee cache line
> > alignment for small buffers (< cache line), so even kmalloc can return
> > a buffer that is not cache line aligned.
> > This being said, we should be good because most NAND controllers are
> > only manipulating the page buffer (->databuf) which is large enough to
> > be cache line aligned.  
> 
> 
> In my understanding kmalloc() returns cache aligned address even for 1 byte memory.

After digging into the SLAB code I found the calculate_alignment()
function [1] which is used to calculate the required alignment of
objects provided by a SLAB cache. For kmalloc caches SLAB_HWCACHE_ALIGN
is set, but if you look at the code, if the object size is smaller
than half a cache line the alignment constraint is relaxed, meaning that
elements smaller than cache_line/2 are not guaranteed to be aligned on
a cache line.

I didn't test, but that should be pretty easy to verify (kmalloc a
bunch of 1byte bufs and check their alignment).

[1]http://lxr.free-electrons.com/source/mm/slab_common.c#L301
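
A throwaway check along those lines might look like this (illustrative
only, not something I actually ran):

	#include <linux/init.h>
	#include <linux/printk.h>
	#include <linux/slab.h>

	/* kmalloc a few 1-byte buffers and print their offset within a
	 * 64-byte cache line to see whether they are cache line aligned. */
	static int __init kmalloc_align_test(void)
	{
		int i;

		for (i = 0; i < 8; i++) {
			void *p = kmalloc(1, GFP_KERNEL);

			if (!p)
				break;
			pr_info("kmalloc(1) = %p, offset in 64B line = %lu\n",
				p, (unsigned long)p & 63);
			kfree(p);
		}
		return 0;
	}
	late_initcall(kmalloc_align_test);
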
Boris Brezillon March 28, 2017, 8:07 a.m. UTC | #4
On Tue, 28 Mar 2017 09:59:07 +0200
Boris Brezillon <boris.brezillon@free-electrons.com> wrote:

> +Russell to correct me if I'm wrong or give further information.
> 
> On Tue, 28 Mar 2017 01:13:10 +0000
> <yamada.masahiro@socionext.com> wrote:
> 
> > Hi Boris,
> > 
> >   
> > > -----Original Message-----
> > > From: Boris Brezillon [mailto:boris.brezillon@free-electrons.com]
> > > Sent: Monday, March 27, 2017 5:01 PM
> > > To: Yamada, Masahiro/山田 真弘 <yamada.masahiro@socionext.com>
> > > Cc: linux-mtd@lists.infradead.org; David Woodhouse
> > > <dwmw2@infradead.org>; Marek Vasut <marek.vasut@gmail.com>; Brian Norris
> > > <computersforpeace@gmail.com>; thorsten.christiansson@idquantique.com;
> > > laurent.monat@idquantique.com; Dinh Nguyen <dinguyen@kernel.org>; Artem
> > > Bityutskiy <artem.bityutskiy@linux.intel.com>; Graham Moore
> > > <grmoore@opensource.altera.com>; Enrico Jorns <ejo@pengutronix.de>;
> > > Chuanxiao Dong <chuanxiao.dong@intel.com>; Masami Hiramatsu
> > > <mhiramat@kernel.org>; Jassi Brar <jaswinder.singh@linaro.org>; Rob
> > > Herring <robh@kernel.org>
> > > Subject: Re: [RESEND PATCH v2 50/53] mtd: nand: allocate aligned buffers
> > > if NAND_OWN_BUFFERS is unset
> > > 
> > > Hi Masahiro,
> > > 
> > > On Thu, 23 Mar 2017 09:17:59 +0900
> > > Masahiro Yamada <yamada.masahiro@socionext.com> wrote:
> > >     
> > > > Commit 66507c7bc889 ("mtd: nand: Add support to use nand_base poi
> > > > databuf as bounce buffer") fixed the issue that drivers can be
> > > > passed with a kmap()'d buffer.  This addressed the use_bufpoi = 0
> > > > case.
> > > >
> > > > When use_bufpoi = 1, chip->buffers->databuf is used.  The databuf
> > > > allocated by nand_scan_tail() is not suitable for DMA due to the
> > > > offset, sizeof(*chip->buffers).    
> > > 
> > > As said earlier, I'm fine with the patch content, but I'm not sure
> > > about your explanation. Alignment requirements are DMA controller
> > > dependent so you're not exactly fixing a bug for all NAND controller
> > > drivers, just those that require >4 bytes alignment.    
> > 
> > 
> > We have two contexts when we talk about alignment for DMA:
> > 
> > [A] Requirement by CPU architecture  (cache alignment)
> > [B] Requirement by the controller's DMA engine
> > 
> > 
> > As I will state below, having sizeof(*chip->buffers) in the same cache
> > line is no good.  This is the same reason as devm_* is not recommended for DMA.
> > (https://lkml.org/lkml/2017/3/8/693)  
> 
> Having non-cache line aligned buffers is definitely more dangerous,
> but, AFAIU, it's not impossible.
> 
> Let's consider this case:
> 
> 
> |        cache line        |        cache line        | ... |
> -------------------------------------------------------------
> | nand_buffers size |                    data               |
> 
> 
> If you call dma_map_single(dev, data, size, DMA_TO_DEVICE), the first
> cache line will be flushed (content written back to memory), and
> assuming you don't touch nand_buffers content between dma_map_single()
> and dma_unmap_single() you shouldn't have any problem (the cache line
> cannot magically turn dirty and thus cannot be flushed in the
> background).
> 
> For the DMA_FROM_DEVICE direction, the cache line is invalidated when
> dma_unmap_single() is called, which means your nand_buffers content
> might be updated with what is present in SDRAM, but it shouldn't have
> changed since nand_buffers is only touched at initialization time (when
> the buffer is created).
> 
> So, for our use case where nand_buffers is never modified between
> dma_map_single() and dma_unmap_single(), it should be safe to have
> non-cache line aligned buffers.
> 
> Russell, please let me know if my reasoning is incorrect.
> Note that I'm not arguing against the new approach where we allocate
> each buffer independently using kmalloc (having cache line aligned
> buffers is definitely safer and does not imply any significant
> overhead), I just want to make sure I understand things correctly.
> 
> > 
> > 
> > The current situation violates [A].  
> 
> Do you have a real failure that is proven to be caused by mis cache
> line alignment, or are you just speculating?
> 
> > 
> > Usually [B] is less than [A].  
> 
> Yep, it's likely the case.
> 
> > So, if we meet [A] (by using kmalloc), [B] will be met as well.  
> 
> Sure.
> 
> > 
> > 
> >   
> > > Regarding the cache line alignment issue, in this case it shouldn't be
> > > a problem, because the driver and the core shouldn't touch the
> > > chip->buffers object during a DMA transfer, so dma_map/unmap_single()
> > > should work fine (they may flush/invalidate one cache line entry that
> > > contains non-payload data, but that's not a problem as long as nothing
> > > is modified during the DMA transfer).    
> > 
> > 
> > This is related to 52/53:
> > http://patchwork.ozlabs.org/patch/742409/
> > 
> > Can you also read this?
> > https://lkml.org/lkml/2017/3/8/693
> > 
> > Your comment is very similar to what was discussed in the thread.  
> 
> I read it.
> 
> > 
> > 
> >   
> > > >
> > > > So, drivers using DMA are very likely to end up with setting the
> > > > NAND_OWN_BUFFERS flag and allocate buffers by themselves.  Drivers
> > > > tend to use devm_k*alloc to simplify the probe failure path, but
> > > > devm_k*alloc() does not return a cache line aligned buffer.    
> > > 
> > > Oh, I didn't know that. I had a closer look at the code, and indeed,
> > > devm_kmalloc() does not guarantee any alignment.
> > >     
> > > >
> > > > If used, it could violate the DMA-API requirement stated in
> > > > Documentation/DMA-API.txt:
> > > >   Warnings:  Memory coherency operates at a granularity called the
> > > >   cache line width.  In order for memory mapped by this API to
> > > >   operate correctly, the mapped region must begin exactly on a cache
> > > >   line boundary and end exactly on one (to prevent two separately
> > > >   mapped regions from sharing a single cache line).
> > > >
> > > > To sum up, NAND_OWN_BUFFERS is not very convenient for drivers.
> > > > nand_scan_tail() can give a separate buffer for each of ecccalc,
> > > > ecccode, databuf.  It is guaranteed kmalloc'ed memory is aligned
> > > > with ARCH_DMA_MINALIGN.    
> > > 
> > > Maybe I'm wrong, but AFAIU, kmalloc&co do not guarantee cache line
> > > alignment for small buffers (< cache line), so even kmalloc can return
> > > a buffer that is not cache line aligned.
> > > This being said, we should be good because most NAND controllers are
> > > only manipulating the page buffer (->databuf) which is large enough to
> > > be cache line aligned.    
> > 
> > 
> > In my understanding kmalloc() returns cache aligned address even for 1 byte memory.  
> 
> After digging into the SLAB code I found the calculate_alignment()
> function [1] which is used to calculate the required alignment of
> objects provided by a SLAB cache. For kmalloc caches SLAB_HWCACHE_ALIGN
> is set, but if you look at the code, if the object size is smaller
> than half a cache line the alignment constraint is relaxed, meaning that
> elements smaller than cache_line/2 are not guaranteed to be aligned on
> a cache line.

Hm, it seems that alignment for kmalloc SLAB caches is set to
ARCH_KMALLOC_MINALIGN which is set to ARCH_DMA_MINALIGN by default and
ARCH_DMA_MINALIGN is usually equal to the L1 cache line size, so I was
wrong here.
Russell King (Oracle) March 28, 2017, 10:17 a.m. UTC | #5
On Tue, Mar 28, 2017 at 09:59:07AM +0200, Boris Brezillon wrote:
> Having non-cache line aligned buffers is definitely more dangerous,

> but, AFAIU, it's not impossible.

> 

> Let's consider this case:

> 

> 

> |        cache line        |        cache line        | ... |

> -------------------------------------------------------------

> | nand_buffers size |                    data               |

> 

> 

> If you call dma_map_single(dev, data, size, DMA_TO_DEVICE), the first

> cache line will be flushed (content written back to memory), and

> assuming you don't touch nand_buffers content between dma_map_single()

> and dma_unmap_single() you shouldn't have any problem (the cache line

> cannot magically turn dirty and thus cannot be flushed in the

> background).


In the DMA_TO_DEVICE case, you're not going to be modifying the data
to be DMA'd.  The DMA certainly is not going to modify the data it's
supposed to be reading.

So, reality is that reading and writing the "data" part including the
overlapping cache line should cause no problem to the DMA activity,
even if that cache line gets written back - the part that overlaps
the DMA data should _not_ modify that data.

More of an issue is the DMA_FROM_DEVICE case...

> For the DMA_FROM_DEVICE direction, the cache line is invalidated when

> dma_unmap_single() is called, which means your nand_buffers content

> might be updated with what is present in SDRAM, but it shouldn't have

> changed since nand_buffers is only touched at initialization time (when

> the buffer is created).


This is exactly where it matters.  When mapping for DMA from the device,
we obviously have to ensure that we aren't going to have any writebacks
from the cache into the DMA area.  Since we don't know whether the
overlapping cache lines contain important data, we write those back, but
invalidate the rest of the buffer when mapping it.

Reading from those cache lines while DMA is in progress is pretty benign,
just like the DMA_TO_DEVICE case.  However, writing to those cache lines
while DMA is in progress is disastrous, because we end up with a choice:

1. if we invalidate the overlapping cache lines, we lose updates that
   the CPU has made.

2. if we write-back the overlapping cache lines, we lose updates that
   the DMA has made.

So either way, there is a data loss risk - there's no getting away from
that.  I've chosen to implement (2) in the ARM code, but either is
equally valid.  (I note in your description above that you think (1)
applies...)

The only solution to that is to avoid all writes to these cache lines
while DMA from the device is in progress.

> So, for our use case where nand_buffers is never modified between

> dma_map_single() and dma_unmap_single(), it should be safe to have

> non-cache line aligned buffers.


Correct, with the exception of what happens at unmap.

> Russell, please let me know if my reasoning is incorrect.

> Note that I'm not arguing against the new approach where we allocate

> each buffer independently using kmalloc (having cache line aligned

> buffers is definitely safer and does not imply any significant

> overhead), I just want to make sure I understand things correctly.


Cache-line-aligned buffers are definitely preferable, but at the arch
level it's not something that can be relied upon.  Historically, if
I had chosen (1) then various drivers would have broken (like SCSI
drivers DMAing sense data directly into the scsi_cmnd struct, something
that has now been resolved).

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.

Russell King (Oracle) March 28, 2017, 10:22 a.m. UTC | #6
On Tue, Mar 28, 2017 at 10:07:21AM +0200, Boris Brezillon wrote:
> Hm, it seems that alignment for kmalloc SLAB caches is set to

> ARCH_KMALLOC_MINALIGN which is set to ARCH_DMA_MINALIGN by default and

> ARCH_DMA_MINALIGN is usually equal to the L1 cache line size, so I was

> wrong here.


Yes, we set kmalloc() up to always return L1 cache line aligned data,
where "L1 cache line aligned" is the _maximum_ cache line alignment
that the kernel has been built for, not the cache line alignment of
the CPU we're running on.

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.

Boris Brezillon March 28, 2017, 12:13 p.m. UTC | #7
On Tue, 28 Mar 2017 11:17:20 +0100
Russell King - ARM Linux <linux@armlinux.org.uk> wrote:

> On Tue, Mar 28, 2017 at 09:59:07AM +0200, Boris Brezillon wrote:

> > Having non-cache line aligned buffers is definitely more dangerous,

> > but, AFAIU, it's not impossible.

> > 

> > Let's consider this case:

> > 

> > 

> > |        cache line        |        cache line        | ... |

> > -------------------------------------------------------------

> > | nand_buffers size |                    data               |

> > 

> > 

> > If you call dma_map_single(dev, data, size, DMA_TO_DEVICE), the first

> > cache line will be flushed (content written back to memory), and

> > assuming you don't touch nand_buffers content between dma_map_single()

> > and dma_unmap_single() you shouldn't have any problem (the cache line

> > cannot magically turn dirty and thus cannot be flushed in the

> > background).  

> 

> In the DMA_TO_DEVICE case, you're not going to be modifying the data

> to be DMA'd.  The DMA certainly is not going to modify the data it's

> supposed to be reading.

> 

> So, reality is that reading and writing the "data" part including the

> overlapping cache line should cause no problem to the DMA activity,

> even if that cache line gets written back - the part that overlaps

> the DMA data should _not_ modify that data.

> 

> More of an issue is the DMA_FROM_DEVICE case...

> 

> > For the DMA_FROM_DEVICE direction, the cache line is invalidated when

> > dma_unmap_single() is called, which means your nand_buffers content

> > might be updated with what is present in SDRAM, but it shouldn't have

> > changed since nand_buffers is only touched at initialization time (when

> > the buffer is created).  

> 

> This is exactly where it matters.  When mapping for DMA from the device,

> we obviously have to ensure that we aren't going to have any writebacks

> from the cache into the DMA area.  Since we don't know whether the

> overlapping cache lines contain important data, we write those back, but

> invalidate the rest of the buffer when mapping it.

> 

> Reading from those cache lines while DMA is in progress is pretty benign,

> just like the DMA_TO_DEVICE case.  However, writing to those cache lines

> while DMA is in progress is disastrous, because we end up with a choice:

> 

> 1. if we invalidate the overlapping cache lines, we lose updates that

>    the CPU has made.

> 

> 2. if we write-back the overlapping cache lines, we lose updates that

>    the DMA has made.

> 

> So either way, there is a data loss risk - there's no getting away from

> that.  I've chosen to implement (2) in the ARM code, but either is

> equally valid.  (I note in your description above that you think (1)

> applies...)


Okay, got it.

> 

> The only solution to that is to avoid all writes to these cache lines

> while DMA from the device is in progress.


And we are in that case: the nand_buffers object will never be modified
between dma_map() and dma_unmap().

> 

> > So, for our use case where nand_buffers is never modified between

> > dma_map_single() and dma_unmap_single(), it should be safe to have

> > non-cache line aligned buffers.  

> 

> Correct, with the exception of what happens at unmap.


Now I'm lost again :-). Didn't you say it was safe to have overlapping
cache lines if nothing writes to these cache lines during the whole time
the buffer is DMA-mapped?
IIUC, the only case where unmap() will write-back cache lines is when
these cache entries are dirty (i.e. when they've been modified through
CPU accesses between map and unmap). Am I missing something?

Masahiro Yamada March 29, 2017, 3:22 a.m. UTC | #8
Hi Boris

> -----Original Message-----

> > The current situation violates [A].

> 

> Do you have a real failure that is proven to be caused by mis cache

> line alignment, or are you just speculating?


No real problem.
Rather, I am following the line 226 of DMA-API.txt
"Warnings: Memory coherency ...".


In my case, the cache line is 64 bytes (for ARM),
but this does not really matter, for the reason you mentioned:
nand_buffers is only touched at initialization.


[B] is a real problem for me
because Denali DMA engine requires >4byte alignment.




So, I can replace the git-log in the next version:

"
Some NAND controllers are using DMA engine requiring a specific buffer
alignment. The core provides no guarantee on the nand_buffers pointers,
which forces some drivers to allocate their own buffers and pass the
NAND_OWN_BUFFERS flag.

Rework the nand_buffers allocation logic to allocate each buffer
independently. This should make most NAND controllers/DMA engine
happy, and allow us to get rid of these custom buf allocation in NAND
controller drivers.
"

Masahiro













> >

> > Usually [B] is less than [A].

> 

> Yep, it's likely the case.

> 

> > So, if we meet [A] (by using kmalloc), [B] will be met as well.

> 

> Sure.

> 

> >

> >

> >

> > > Regarding the cache line alignment issue, in this case it shouldn't

> be

> > > a problem, because the driver and the core shouldn't touch the

> > > chip->buffers object during a DMA transfer, so dma_map/unmap_single()

> > > should work fine (they may flush/invalidate one cache line entry that

> > > contains non-payload data, but that's not a problem as long as nothing

> > > is modified during the DMA transfer).

> >

> >

> > This is related to 52/53:

> > http://patchwork.ozlabs.org/patch/742409/

> >

> > Can you also read this?

> > https://lkml.org/lkml/2017/3/8/693

> >

> > Your comment is very similar to what was discussed in the thread.

> 

> I read it.

> 

> >

> >

> >

> > > >

> > > > So, drivers using DMA are very likely to end up with setting the

> > > > NAND_OWN_BUFFERS flag and allocate buffers by themselves.  Drivers

> > > > tend to use devm_k*alloc to simplify the probe failure path, but

> > > > devm_k*alloc() does not return a cache line aligned buffer.

> > >

> > > Oh, I didn't know that. I had a closer look at the code, and indeed,

> > > devm_kmalloc() does not guarantee any alignment.

> > >

> > > >

> > > > If used, it could violate the DMA-API requirement stated in

> > > > Documentation/DMA-API.txt:

> > > >   Warnings:  Memory coherency operates at a granularity called the

> > > >   cache line width.  In order for memory mapped by this API to

> > > >   operate correctly, the mapped region must begin exactly on a cache

> > > >   line boundary and end exactly on one (to prevent two separately

> > > >   mapped regions from sharing a single cache line).

> > > >

> > > > To sum up, NAND_OWN_BUFFERS is not very convenient for drivers.

> > > > nand_scan_tail() can give a separate buffer for each of ecccalc,

> > > > ecccode, databuf.  It is guaranteed kmalloc'ed memory is aligned

> > > > with ARCH_DMA_MINALIGN.

> > >

> > > Maybe I'm wrong, but AFAIU, kmalloc&co do not guarantee cache line

> > > alignment for small buffers (< cache line), so even kmalloc can return

> > > a buffer that is not cache line aligned.

> > > This being said, we should be good because most NAND controllers are

> > > only manipulating the page buffer (->databuf) which is large enough

> to

> > > be cache line aligned.

> >

> >

> > In my understanding kmalloc() returns cache aligned address even for 1

> byte memory.

> 

> After digging into the SLAB code I found the calculate_alignment()

> function [1] which is used to calculate the required alignment of

> objects provided by a SLAB cache. For kmalloc caches SLAB_HWCACHE_ALIGN

> is set, but if you look at the code, if the object size is smaller

> than half a cache line the alignment constraint is relaxed, meaning that

> elements smaller than cache_line/2 are not guaranteed to be aligned on

> a cache line.

> 

> I didn't test, but that should be pretty easy to verify (kmalloc a

> bunch of 1byte bufs and check their alignment).

> 

> [1]http://lxr.free-electrons.com/source/mm/slab_common.c#L301

Boris Brezillon March 29, 2017, 7:03 a.m. UTC | #9
On Wed, 29 Mar 2017 03:22:01 +0000
<yamada.masahiro@socionext.com> wrote:

> Hi Boris

> 

> > -----Original Message-----  

> > > The current situation violates [A].  

> > 

> > Do you have a real failure that is proven to be caused by mis cache

> > line alignment, or are you just speculating?  

> 

> No real problem.

> Rather, I am following the line 226 of DMA-API.txt

> "Warnings: Memory coherency ...".

> 

> 

> In my case, the cache line is 64 byte (for ARM),

> But this does not really matter for the reason

> nand_buffers is only touched at initialization as you mentioned.

> 

> 

> [B] is a real problem for me

> because Denali DMA engine requires >4byte alignment.

> 

> 

> 

> 

> So, I can replace the git-log in the next version:

> 

> "

> Some NAND controllers are using DMA engine requiring a specific buffer

> alignment. The core provides no guarantee on the nand_buffers pointers,

> which forces some drivers to allocate their own buffers and pass the

> NAND_OWN_BUFFERS flag.

> 

> Rework the nand_buffers allocation logic to allocate each buffer

> independently. This should make most NAND controllers/DMA engine

> happy, and allow us to get rid of these custom buf allocation in NAND

> controller drivers.

> "


Yep, sounds better to me.

Thanks,

Boris


Patch

diff --git a/drivers/mtd/nand/nand_base.c b/drivers/mtd/nand/nand_base.c
index e13f959..6fc0422 100644
--- a/drivers/mtd/nand/nand_base.c
+++ b/drivers/mtd/nand/nand_base.c
@@ -4614,13 +4614,25 @@  int nand_scan_tail(struct mtd_info *mtd)
 	}
 
 	if (!(chip->options & NAND_OWN_BUFFERS)) {
-		nbuf = kzalloc(sizeof(*nbuf) + mtd->writesize
-				+ mtd->oobsize * 3, GFP_KERNEL);
+		nbuf = kzalloc(sizeof(*nbuf), GFP_KERNEL);
 		if (!nbuf)
 			return -ENOMEM;
-		nbuf->ecccalc = (uint8_t *)(nbuf + 1);
-		nbuf->ecccode = nbuf->ecccalc + mtd->oobsize;
-		nbuf->databuf = nbuf->ecccode + mtd->oobsize;
+		nbuf->ecccalc = kmalloc(mtd->oobsize, GFP_KERNEL);
+		if (!nbuf->ecccalc) {
+			ret = -EINVAL;
+			goto err_free;
+		}
+		nbuf->ecccode = kmalloc(mtd->oobsize, GFP_KERNEL);
+		if (!nbuf->ecccode) {
+			ret = -EINVAL;
+			goto err_free;
+		}
+		nbuf->databuf = kmalloc(mtd->writesize + mtd->oobsize,
+					GFP_KERNEL);
+		if (!nbuf->databuf) {
+			ret = -EINVAL;
+			goto err_free;
+		}
 
 		chip->buffers = nbuf;
 	} else {
@@ -4863,8 +4875,12 @@  int nand_scan_tail(struct mtd_info *mtd)
 	/* Build bad block table */
 	return chip->scan_bbt(mtd);
 err_free:
-	if (!(chip->options & NAND_OWN_BUFFERS))
+	if (!(chip->options & NAND_OWN_BUFFERS)) {
+		kfree(chip->buffers->databuf);
+		kfree(chip->buffers->ecccode);
+		kfree(chip->buffers->ecccalc);
 		kfree(chip->buffers);
+	}
 	return ret;
 }
 EXPORT_SYMBOL(nand_scan_tail);
@@ -4915,8 +4931,12 @@  void nand_cleanup(struct nand_chip *chip)
 
 	/* Free bad block table memory */
 	kfree(chip->bbt);
-	if (!(chip->options & NAND_OWN_BUFFERS))
+	if (!(chip->options & NAND_OWN_BUFFERS)) {
+		kfree(chip->buffers->databuf);
+		kfree(chip->buffers->ecccode);
+		kfree(chip->buffers->ecccalc);
 		kfree(chip->buffers);
+	}
 
 	/* Free bad block descriptor memory */
 	if (chip->badblock_pattern && chip->badblock_pattern->options