Bug(?) in patch "arm64: Implement coherent DMA API based on swiotlb" (was Re: [GIT PULL] arm64 patches for 3.15)

Message ID 20140401172939.GG20061@arm.com

Commit Message

Catalin Marinas April 1, 2014, 5:29 p.m. UTC
On Tue, Apr 01, 2014 at 05:10:57PM +0100, Jon Medhurst (Tixy) wrote:
> On Mon, 2014-03-31 at 18:52 +0100, Catalin Marinas wrote:
> > The following changes since commit cfbf8d4857c26a8a307fb7cd258074c9dcd8c691:
> > 
> >   Linux 3.14-rc4 (2014-02-23 17:40:03 -0800)
> > 
> > are available in the git repository at:
> > 
> >   git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux tags/arm64-upstream
> > 
> > for you to fetch changes up to 196adf2f3015eacac0567278ba538e3ffdd16d0e:
> > 
> >   arm64: Remove pgprot_dmacoherent() (2014-03-24 10:35:35 +0000)
> 
> I may have spotted a bug in commit 7363590d2c46 (arm64: Implement
> coherent DMA API based on swiotlb), see my inline comment below...
> 
> [...]
> > diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
> > index 1ea9f26..97fcef5 100644
> > --- a/arch/arm64/mm/cache.S
> > +++ b/arch/arm64/mm/cache.S
> > @@ -166,3 +166,81 @@ ENTRY(__flush_dcache_area)
> >  	dsb	sy
> >  	ret
> >  ENDPROC(__flush_dcache_area)
> > +
> > +/*
> > + *	__dma_inv_range(start, end)
> > + *	- start   - virtual start address of region
> > + *	- end     - virtual end address of region
> > + */
> > +__dma_inv_range:
> > +	dcache_line_size x2, x3
> > +	sub	x3, x2, #1
> > +	bic	x0, x0, x3
> > +	bic	x1, x1, x3
> 
> Why is the 'end' value in x1 above rounded down to be cache aligned?
> This means the cache invalidate won't include the cache line containing
> the final bytes of the region, unless it happened to already be cache
> line aligned. This looks especially suspect as the other two cache
> operations added in the same patch (below) don't do that.

Cache invalidation is destructive, so we want to make sure that it
doesn't affect anything beyond x1. But you are right, if either end of
the buffer is not cache line aligned it can get it wrong. The fix is to
use clean+invalidate on the unaligned ends:

Comments

Jon Medhurst (Tixy) April 2, 2014, 8:52 a.m. UTC | #1
On Tue, 2014-04-01 at 18:29 +0100, Catalin Marinas wrote:
> On Tue, Apr 01, 2014 at 05:10:57PM +0100, Jon Medhurst (Tixy) wrote:
> > On Mon, 2014-03-31 at 18:52 +0100, Catalin Marinas wrote:
> > > The following changes since commit cfbf8d4857c26a8a307fb7cd258074c9dcd8c691:
> > > 
> > >   Linux 3.14-rc4 (2014-02-23 17:40:03 -0800)
> > > 
> > > are available in the git repository at:
> > > 
> > >   git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux tags/arm64-upstream
> > > 
> > > for you to fetch changes up to 196adf2f3015eacac0567278ba538e3ffdd16d0e:
> > > 
> > >   arm64: Remove pgprot_dmacoherent() (2014-03-24 10:35:35 +0000)
> > 
> > I may have spotted a bug in commit 7363590d2c46 (arm64: Implement
> > coherent DMA API based on swiotlb), see my inline comment below...
> > 
> > [...]
> > > diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
> > > index 1ea9f26..97fcef5 100644
> > > --- a/arch/arm64/mm/cache.S
> > > +++ b/arch/arm64/mm/cache.S
> > > @@ -166,3 +166,81 @@ ENTRY(__flush_dcache_area)
> > >  	dsb	sy
> > >  	ret
> > >  ENDPROC(__flush_dcache_area)
> > > +
> > > +/*
> > > + *	__dma_inv_range(start, end)
> > > + *	- start   - virtual start address of region
> > > + *	- end     - virtual end address of region
> > > + */
> > > +__dma_inv_range:
> > > +	dcache_line_size x2, x3
> > > +	sub	x3, x2, #1
> > > +	bic	x0, x0, x3
> > > +	bic	x1, x1, x3
> > 
> > Why is the 'end' value in x1 above rounded down to be cache aligned?
> > This means the cache invalidate won't include the cache line containing
> > the final bytes of the region, unless it happened to already be cache
> > line aligned. This looks especially suspect as the other two cache
> > operations added in the same patch (below) don't do that.
> 
> Cache invalidation is destructive, so we want to make sure that it
> doesn't affect anything beyond x1. But you are right, if either end of
> the buffer is not cache line aligned it can get it wrong. The fix is to
> use clean+invalidate on the unaligned ends:

Like the ARMv7 implementation does :-) However, I wonder, is it possible
for the Cache Writeback Granule (CWG) to come into play? If the CWG of
further out caches was bigger than closer (to CPU) caches then it would
cause data corruption. So for these region ends, should we not be using
the CWG size, not the minimum D cache line size? On second thoughts,
that wouldn't be safe either in the converse case where the CWG of a
closer cache was bigger. So we would need to first use minimum cache
line size to clean a CWG sized region, then invalidate cache lines by
the same method. But then that leaves a time period where a write can
happen between the clean and the invalidate, again leading to data
corruption. I hope all this means I've either got rather confused or
that cache architectures are smart enough to automatically cope.

I also have a couple of comments on the specific changes below...

> 
> diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
> index c46f48b33c14..6a26bf1965d3 100644
> --- a/arch/arm64/mm/cache.S
> +++ b/arch/arm64/mm/cache.S
> @@ -175,10 +175,17 @@ ENDPROC(__flush_dcache_area)
>  __dma_inv_range:
>  	dcache_line_size x2, x3
>  	sub	x3, x2, #1
> -	bic	x0, x0, x3
> +	tst	x1, x3				// end cache line aligned?
>  	bic	x1, x1, x3
> -1:	dc	ivac, x0			// invalidate D / U line
> -	add	x0, x0, x2
> +	b.eq	1f
> +	dc	civac, x1			// clean & invalidate D / U line

That is actually cleaning the address one byte past the end of the
region, not sure it matters though because it is still within the same
minimum cache line sized region.

> +1:	tst	x0, x3				// start cache line aligned?
> +	bic	x0, x0, x3
> +	b.eq	2f
> +	dc	civac, x0			// clean & invalidate D / U line
> +	b	3f
> +2:	dc	ivac, x0			// invalidate D / U line
> +3:	add	x0, x0, x2
>  	cmp	x0, x1
>  	b.lo	1b

The above obviously also needs changing to branch to 3b

>  	dsb	sy
>
Catalin Marinas April 2, 2014, 9:20 a.m. UTC | #2
On Wed, Apr 02, 2014 at 09:52:02AM +0100, Jon Medhurst (Tixy) wrote:
> On Tue, 2014-04-01 at 18:29 +0100, Catalin Marinas wrote:
> > On Tue, Apr 01, 2014 at 05:10:57PM +0100, Jon Medhurst (Tixy) wrote:
> > > On Mon, 2014-03-31 at 18:52 +0100, Catalin Marinas wrote:
> > > > +__dma_inv_range:
> > > > +	dcache_line_size x2, x3
> > > > +	sub	x3, x2, #1
> > > > +	bic	x0, x0, x3
> > > > +	bic	x1, x1, x3
> > > 
> > > Why is the 'end' value in x1 above rounded down to be cache aligned?
> > > This means the cache invalidate won't include the cache line containing
> > > the final bytes of the region, unless it happened to already be cache
> > > line aligned. This looks especially suspect as the other two cache
> > > operations added in the same patch (below) don't do that.
> > 
> > Cache invalidation is destructive, so we want to make sure that it
> > doesn't affect anything beyond x1. But you are right, if either end of
> > the buffer is not cache line aligned it can get it wrong. The fix is to
> > use clean+invalidate on the unaligned ends:
> 
> Like the ARMv7 implementation does :-) However, I wonder, is it possible
> for the Cache Writeback Granule (CWG) to come into play? If the CWG of
> further out caches was bigger than closer (to CPU) caches then it would
> cause data corruption. So for these region ends, should we not be using
> the CWG size, not the minimum D cache line size? On second thoughts,
> that wouldn't be safe either in the converse case where the CWG of a
> closer cache was bigger. So we would need to first use minimum cache
> line size to clean a CWG sized region, then invalidate cache lines by
> the same method.

CWG gives us the maximum size (of all cache levels in the system, even
on a different CPU for example in big.LITTLE configurations) that would
be evicted by the cache operation. So we need small loops of Dmin size
that go over the bigger CWG (and that's guaranteed to be at least Dmin).
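
(Purely as an illustration of that shape, in C rather than assembly; dc_civac()
is a stand-in for the DC CIVAC instruction, not a real kernel helper:)

	static void clean_inval_cwg_window(unsigned long addr,
					   unsigned long dmin, unsigned long cwg)
	{
		unsigned long start = addr & ~(cwg - 1);	/* CWG-aligned window base */
		unsigned long p;

		/*
		 * Stride is Dmin so no line is skipped; coverage is the whole
		 * CWG-sized window that a single maintenance op might affect.
		 */
		for (p = start; p < start + cwg; p += dmin)
			dc_civac(p);
	}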

> But then that leaves a time period where a write can
> happen between the clean and the invalidate, again leading to data
> corruption. I hope all this means I've either got rather confused or
> that cache architectures are smart enough to automatically cope.

You are right. I think having unaligned DMA buffers for inbound
transfers is pointless. We can avoid losing data written by another CPU
in the same cache line but, depending on the stage of the DMA transfer,
it can corrupt the DMA data.

I wonder whether it's easier to define the cache_line_size() macro to
read CWG and assume that the DMA buffers are always aligned, ignoring
the invalidation of the unaligned boundaries. This wouldn't be much
different from your scenario where the shared cache line is written
(just less likely to trigger but still a bug, so I would rather notice
this early).
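
(A rough sketch of such a CWG-based helper, assuming CTR_EL0[27:24] holds CWG
encoded as log2 of the number of words; the function name is made up and this
is not an actual patch:)

	#include <asm/cache.h>

	static inline unsigned int cwg_cache_line_size(void)
	{
		unsigned long ctr;
		unsigned int cwg;

		asm volatile("mrs %0, ctr_el0" : "=r" (ctr));
		cwg = (ctr >> 24) & 0xf;		/* Cache Writeback Granule field */

		return cwg ? 4U << cwg : L1_CACHE_BYTES;/* bytes; fall back if not reported */
	}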

The ARMv7 code has a similar issue, it performs clean&invalidate on the
unaligned start but it doesn't move r0, so it goes into the main loop
invalidating the same cache line again. If it was written by something
else, the information would be lost.

> I also have a couple of comments on the specific changes below...
> 
> > diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
> > index c46f48b33c14..6a26bf1965d3 100644
> > --- a/arch/arm64/mm/cache.S
> > +++ b/arch/arm64/mm/cache.S
> > @@ -175,10 +175,17 @@ ENDPROC(__flush_dcache_area)
> >  __dma_inv_range:
> >  	dcache_line_size x2, x3
> >  	sub	x3, x2, #1
> > -	bic	x0, x0, x3
> > +	tst	x1, x3				// end cache line aligned?
> >  	bic	x1, x1, x3
> > -1:	dc	ivac, x0			// invalidate D / U line
> > -	add	x0, x0, x2
> > +	b.eq	1f
> > +	dc	civac, x1			// clean & invalidate D / U line
> 
> That is actually cleaning the address one byte past the end of the
> region, not sure it matters though because it is still within the same
> minimum cache line sized region.

It shouldn't, there is a "bic x1, x1, x3" above and this dc only happens
if the address was unaligned.

> > +1:	tst	x0, x3				// start cache line aligned?
> > +	bic	x0, x0, x3
> > +	b.eq	2f
> > +	dc	civac, x0			// clean & invalidate D / U line
> > +	b	3f
> > +2:	dc	ivac, x0			// invalidate D / U line
> > +3:	add	x0, x0, x2
> >  	cmp	x0, x1
> >  	b.lo	1b
> 
> The above obviously also needs changing to branch to 3b

Good point.

(but I'm no longer convinced we need the hassle above ;))
Russell King - ARM Linux April 2, 2014, 9:40 a.m. UTC | #3
On Wed, Apr 02, 2014 at 10:20:32AM +0100, Catalin Marinas wrote:
> You are right. I think having unaligned DMA buffers for inbound
> transfers is pointless. We can avoid losing data written by another CPU
> in the same cache line but, depending on the stage of the DMA transfer,
> it can corrupt the DMA data.
> 
> I wonder whether it's easier to define the cache_line_size() macro to
> read CWG and assume that the DMA buffers are always aligned, ignoring
> the invalidation of the unaligned boundaries. This wouldn't be much
> different from your scenario where the shared cache line is written
> (just less likely to trigger but still a bug, so I would rather notice
> this early).
> 
> The ARMv7 code has a similar issue, it performs clean&invalidate on the
> unaligned start but it doesn't move r0, so it goes into the main loop
> invalidating the same cache line again. If it was written by something
> else, the information would be lost.

You can't make that a requirement.  People have shared stuff across a
cache line for years in Linux, and people have brought it up and tried
to fix it, but there's much resistance against it.  A particular case is
SCSI, which submits the sense buffer as part of a larger structure (the
host).  SCSI sort-of guarantees that the surrounding struct members
won't be touched, but their data has to be preserved.

In any case, remember that there are strict rules about ownership of the
DMA memory vs calls to the DMA API.  It is invalid to call the DMA
streaming API functions while a DMA transfer is active.
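
To spell those rules out, a minimal sketch of a device-to-memory transfer with
the streaming API (generic driver code, nothing to do with this patch;
start_device_dma()/wait_for_device_dma() are made-up placeholders):

	#include <linux/dma-mapping.h>

	static int rx_one_buffer(struct device *dev, void *buf, size_t len)
	{
		dma_addr_t handle;

		handle = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);
		if (dma_mapping_error(dev, handle))
			return -ENOMEM;

		start_device_dma(handle, len);	/* device owns the buffer from here on */
		wait_for_device_dma();		/* no CPU access to buf, and no further
						 * streaming API calls, until this returns */

		dma_unmap_single(dev, handle, len, DMA_FROM_DEVICE);
		return 0;			/* CPU may now read what the device wrote */
	}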
Jon Medhurst (Tixy) April 2, 2014, 10:41 a.m. UTC | #4
On Wed, 2014-04-02 at 10:20 +0100, Catalin Marinas wrote:
> On Wed, Apr 02, 2014 at 09:52:02AM +0100, Jon Medhurst (Tixy) wrote:
> > On Tue, 2014-04-01 at 18:29 +0100, Catalin Marinas wrote:
> > > On Tue, Apr 01, 2014 at 05:10:57PM +0100, Jon Medhurst (Tixy) wrote:
> > > > On Mon, 2014-03-31 at 18:52 +0100, Catalin Marinas wrote:
> > > > > +__dma_inv_range:
> > > > > +	dcache_line_size x2, x3
> > > > > +	sub	x3, x2, #1
> > > > > +	bic	x0, x0, x3
> > > > > +	bic	x1, x1, x3
> > > > 
> > > > Why is the 'end' value in x1 above rounded down to be cache aligned?
> > > > This means the cache invalidate won't include the cache line containing
> > > > the final bytes of the region, unless it happened to already be cache
> > > > line aligned. This looks especially suspect as the other two cache
> > > > operations added in the same patch (below) don't do that.
> > > 
> > > Cache invalidation is destructive, so we want to make sure that it
> > > doesn't affect anything beyond x1. But you are right, if either end of
> > > the buffer is not cache line aligned it can get it wrong. The fix is to
> > > use clean+invalidate on the unaligned ends:
> > 
> > Like the ARMv7 implementation does :-) However, I wonder, is it possible
> > for the Cache Writeback Granule (CWG) to come into play? If the CWG of
> > further out caches was bigger than closer (to CPU) caches then it would
> > cause data corruption. So for these region ends, should we not be using
> > the CWG size, not the minimum D cache line size? On second thoughts,
> > that wouldn't be safe either in the converse case where the CWG of a
> > closer cache was bigger. So we would need to first use minimum cache
> > line size to clean a CWG sized region, then invalidate cache lines by
> > the same method.
> 
> CWG gives us the maximum size (of all cache levels in the system, even
> on a different CPU for example in big.LITTLE configurations) that would
> be evicted by the cache operation. So we need small loops of Dmin size
> that go over the bigger CWG (and that's guaranteed to be at least Dmin).

Yes, that's what I was getting at.

> 
> > But then that leaves a time period where a write can
> > happen between the clean and the invalidate, again leading to data
> > corruption. I hope all this means I've either got rather confused or
> > that cache architectures are smart enough to automatically cope.
> 
> You are right. I think having unaligned DMA buffers for inbound
> transfers is pointless. We can avoid losing data written by another CPU
> in the same cache line but, depending on the stage of the DMA transfer,
> it can corrupt the DMA data.
> 
> I wonder whether it's easier to define the cache_line_size() macro to
> read CWG

That won't work, the stride of cache operations needs to be the
_minimum_ cache line size, otherwise we might skip over some cache lines
and not flush them. (We've been hit before by bugs caused by the fact
that big.LITTLE systems report different minimum i-cache line sizes
depending on whether you execute on the big or LITTLE cores [1]; we need
the 'real' minimum, otherwise things go horribly wrong.)

[1] http://lists.infradead.org/pipermail/linux-arm-kernel/2013-February/149950.html

>  and assume that the DMA buffers are always aligned,

We can't assume the region in any particular DMA transfer is cache
aligned, but I agree that if multiple actors were operating on adjacent
memory locations in the same cache line without implementing their own
coordination, then there's nothing the low level DMA code can do to avoid
data corruption from cache cleaning.

We at least need to make sure that the memory allocation functions used
for DMA buffers return regions of whole CWG size, to avoid unrelated
buffers corrupting each other. If I have correctly read
__dma_alloc_noncoherent and the functions it calls, then it looks like
buffers are actually whole pages, so that's not a problem.
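
(If we wanted to be defensive about it, a boot-time check along these lines
could catch the case where the kmalloc minimum alignment is smaller than the
CWG; illustrative only, not a posted patch, and cwg_cache_line_size() is the
hypothetical helper sketched earlier in the thread:)

	#include <linux/bug.h>
	#include <linux/init.h>
	#include <linux/slab.h>

	static int __init check_cwg_granule(void)
	{
		unsigned int cwg = cwg_cache_line_size();

		WARN(ARCH_KMALLOC_MINALIGN < cwg,
		     "kmalloc alignment %d is smaller than CWG %u, unrelated buffers may share a writeback granule\n",
		     ARCH_KMALLOC_MINALIGN, cwg);
		return 0;
	}
	core_initcall(check_cwg_granule);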

>  ignoring
> the invalidation of the unaligned boundaries. This wouldn't be much
> different from your scenario where the shared cache line is written
> (just less likely to trigger but still a bug, so I would rather notice
> this early).
> 
> The ARMv7 code has a similar issue, it performs clean&invalidate on the
> unaligned start but it doesn't move r0, so it goes into the main loop
> invalidating the same cache line again.

Yes, and as it's missing a dsb, it could also lead to the wrong behaviour if
the invalidate was reordered to execute prior to the clean+invalidate on
the same line. I just dug into git history to see if I could find a clue
as to how the v7 code came to look like it does, but I see that it's
been like that since the day it was submitted in 2007, by a certain
Catalin Marinas ;-)
Jon Medhurst (Tixy) April 2, 2014, 10:54 a.m. UTC | #5
On Wed, 2014-04-02 at 10:20 +0100, Catalin Marinas wrote:
> On Wed, Apr 02, 2014 at 09:52:02AM +0100, Jon Medhurst (Tixy) wrote:
> > On Tue, 2014-04-01 at 18:29 +0100, Catalin Marinas wrote:

> > > +1:	tst	x0, x3				// start cache line aligned?
> > > +	bic	x0, x0, x3
> > > +	b.eq	2f
> > > +	dc	civac, x0			// clean & invalidate D / U line
> > > +	b	3f
> > > +2:	dc	ivac, x0			// invalidate D / U line
> > > +3:	add	x0, x0, x2
> > >  	cmp	x0, x1
> > >  	b.lo	1b
> > 
> > The above obviously also needs changing to branch to 3b
> 
> Good point.

Actually, it should be 2b :-)
Catalin Marinas April 2, 2014, 11:13 a.m. UTC | #6
On Wed, Apr 02, 2014 at 10:40:45AM +0100, Russell King - ARM Linux wrote:
> On Wed, Apr 02, 2014 at 10:20:32AM +0100, Catalin Marinas wrote:
> > You are right. I think having unaligned DMA buffers for inbound
> > transfers is pointless. We can avoid losing data written by another CPU
> > in the same cache line but, depending on the stage of the DMA transfer,
> > it can corrupt the DMA data.
> > 
> > I wonder whether it's easier to define the cache_line_size() macro to
> > read CWG and assume that the DMA buffers are always aligned, ignoring
> > the invalidation of the unaligned boundaries. This wouldn't be much
> > different from your scenario where the shared cache line is written
> > (just less likely to trigger but still a bug, so I would rather notice
> > this early).
> > 
> > The ARMv7 code has a similar issue, it performs clean&invalidate on the
> > unaligned start but it doesn't move r0, so it goes into the main loop
> > invalidating the same cache line again. If it was written by something
> > else, the information would be lost.
> 
> You can't make that a requirement.  People have shared stuff across a
> cache line for years in Linux, and people have brought it up and tried
> to fix it, but there's much resistance against it.  A particular case is
> SCSI, which submits the sense buffer as part of a larger structure (the
> host).  SCSI sort-of guarantees that the surrounding struct members
> won't be touched, but their data has to be preserved.

Let's hope that CWG stays small enough on real hardware (the architecture
allows a maximum of 2K).

> In any case, remember that there are strict rules about ownership of the
> DMA memory vs calls to the DMA API.  It is invalid to call the DMA
> streaming API functions while a DMA transfer is active.

Yes, I was referring to the non-DMA buffer area in the same cache line being
touched during a DMA transfer.
Catalin Marinas April 2, 2014, 11:37 a.m. UTC | #7
On Wed, Apr 02, 2014 at 11:41:30AM +0100, Jon Medhurst (Tixy) wrote:
> On Wed, 2014-04-02 at 10:20 +0100, Catalin Marinas wrote:
> > On Wed, Apr 02, 2014 at 09:52:02AM +0100, Jon Medhurst (Tixy) wrote:
> > > But then that leaves a time period where a write can
> > > happen between the clean and the invalidate, again leading to data
> > > corruption. I hope all this means I've either got rather confused or
> > > that cache architectures are smart enough to automatically cope.
> > 
> > You are right. I think having unaligned DMA buffers for inbound
> > transfers is pointless. We can avoid losing data written by another CPU
> > in the same cache line but, depending on the stage of the DMA transfer,
> > it can corrupt the DMA data.
> > 
> > I wonder whether it's easier to define the cache_line_size() macro to
> > read CWG
> 
> That won't work, the stride of cache operations needs to be the
> _minimum_ cache line size, otherwise we might skip over some cache lines
> and not flush them. (We've been hit before by bugs caused by the fact
> that big.LITTLE systems report different minimum i-cache line sizes
> depending on whether you execute on the big or LITTLE cores [1]; we need
> the 'real' minimum, otherwise things go horribly wrong.)
> 
> [1] http://lists.infradead.org/pipermail/linux-arm-kernel/2013-February/149950.html

Yes, I remember this. CWG should also be the same in a big.LITTLE
system.

> >  and assume that the DMA buffers are always aligned,
> 
> We can't assume the region in any particular DMA transfer is cache
> aligned, but I agree that if multiple actors were operating on adjacent
> memory locations in the same cache line without implementing their own
> coordination, then there's nothing the low level DMA code can do to avoid
> data corruption from cache cleaning.
> 
> We at least need to make sure that the memory allocation functions used
> for DMA buffers return regions of whole CWG size, to avoid unrelated
> buffers corrupting each other. If I have correctly read
> __dma_alloc_noncoherent and the functions it calls, then it looks like
> buffers are actually whole pages, so that's not a problem.

It's not about dma_alloc but the streaming DMA API like dma_map_sg().

> >  ignoring
> > the invalidation of the unaligned boundaries. This wouldn't be much
> > different from your scenario where the shared cache line is written
> > (just less likely to trigger but still a bug, so I would rather notice
> > this early).
> > 
> > The ARMv7 code has a similar issue, it performs clean&invalidate on the
> > unaligned start but it doesn't move r0, so it goes into the main loop
> > invalidating the same cache line again.
> 
> Yes, and as it's missing a dsb, it could also lead to the wrong behaviour if
> the invalidate was reordered to execute prior to the clean+invalidate on
> the same line. I just dug into git history to see if I could find a clue
> as to how the v7 code came to look like it does, but I see that it's
> been like that since the day it was submitted in 2007, by a certain
> Catalin Marinas ;-)

I don't remember ;). But there are some rules about reordering of cache
line operations by MVA with regards to memory accesses. I have to check
whether they apply to other d-cache maintenance to the same address as
well.

I'll try to come up with another patch using CWG.

Patch

diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
index c46f48b33c14..6a26bf1965d3 100644
--- a/arch/arm64/mm/cache.S
+++ b/arch/arm64/mm/cache.S
@@ -175,10 +175,17 @@ ENDPROC(__flush_dcache_area)
 __dma_inv_range:
 	dcache_line_size x2, x3
 	sub	x3, x2, #1
-	bic	x0, x0, x3
+	tst	x1, x3				// end cache line aligned?
 	bic	x1, x1, x3
-1:	dc	ivac, x0			// invalidate D / U line
-	add	x0, x0, x2
+	b.eq	1f
+	dc	civac, x1			// clean & invalidate D / U line
+1:	tst	x0, x3				// start cache line aligned?
+	bic	x0, x0, x3
+	b.eq	2f
+	dc	civac, x0			// clean & invalidate D / U line
+	b	3f
+2:	dc	ivac, x0			// invalidate D / U line
+3:	add	x0, x0, x2
 	cmp	x0, x1
 	b.lo	1b
 	dsb	sy
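
For readers less fluent in the assembly, here is a rough C model of what the
patched __dma_inv_range above is meant to do; dc_civac()/dc_ivac()/dsb_sy()
are placeholders for the corresponding instructions, not real kernel
functions:

	/* line = minimum D-cache line size, as read by dcache_line_size */
	static void dma_inv_range_model(unsigned long start, unsigned long end,
					unsigned long line)
	{
		unsigned long mask = line - 1;

		if (end & mask)				/* unaligned tail: */
			dc_civac(end & ~mask);		/*   clean+invalidate its line */
		end &= ~mask;

		if (start & mask) {			/* unaligned head: */
			dc_civac(start & ~mask);	/*   clean+invalidate its line */
			start = (start & ~mask) + line;	/*   and skip past it */
		}

		for (; start < end; start += line)	/* aligned middle: */
			dc_ivac(start);			/*   invalidate only */

		dsb_sy();
	}

The branch-label discussion above ('3b' vs '2b') is about where the assembly
loop re-enters the invalidate step; in this model it simply corresponds to
the for loop over the aligned middle.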