
SATA broken with LPAE

Message ID 16f065ef-f4ac-46b4-de2a-6b5420ae873a@ti.com
State New

Commit Message

Roger Quadros June 26, 2019, 8:43 a.m. UTC
Hi Christoph / Hans,

SATA has been broken on TI platforms with LPAE on systems that have RAM
at physical addresses above 32 bits (e.g. DRA7 rev.H+) since v4.18.

The commit at which it breaks is
21e07dba9fb1179148089d611fc9e6e70d1887c3 ("scsi: reduce use of block bounce buffers").

The effect is that the SATA controller starts to see DMA addresses
above 32 bits, which it does not support.

Could you please shed some light on how this is supposed to work if
we don't call blk_queue_bounce_limit() for devices that can only do 32-bit DMA
on a system that has addressable RAM above the 32-bit physical limit?

The below patch fixes it. Is this the right thing to do?

 	}
 
 	blk_queue_max_hw_sectors(q, shost->max_sectors);
-	if (shost->unchecked_isa_dma)
-		blk_queue_bounce_limit(q, BLK_BOUNCE_ISA);
+	blk_queue_bounce_limit(q, scsi_calculate_bounce_limit(shost));
 	blk_queue_segment_boundary(q, shost->dma_boundary);
 	dma_set_seg_boundary(dev, shost->dma_boundary);

cheers,
-roger
-- 
Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki.
Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki
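
For context, the scsi_calculate_bounce_limit() helper used in the patch derives
the limit from the device's DMA mask via dma_max_pfn(). A minimal sketch of
that arithmetic; the example_bounce_limit() name is purely illustrative and not
kernel code:

#include <linux/dma-mapping.h>

/*
 * Sketch only: dma_max_pfn() reports the highest page frame the device
 * can address, so shifting it by PAGE_SHIFT yields the highest bus
 * address the block layer may hand to the driver without bouncing.
 */
static u64 example_bounce_limit(struct device *dev)
{
	if (!dev->dma_mask)
		return 0xffffffff;	/* conservative 32-bit default */

	return (u64)dma_max_pfn(dev) << PAGE_SHIFT;
}

For a controller restricted to DMA_BIT_MASK(32) this comes out just below the
4 GiB boundary, which is the limit the DRA7 SATA controller needs enforced.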

Comments

Christoph Hellwig June 26, 2019, 12:53 p.m. UTC | #1
Hi Roger,

it seems the arm dma direct mapping code isn't doing the right thing
here.  On other platforms that have > 4G memory we always use swiotlb
for bounce buffering in case a device can't DMA to all the memory.

Arm is the odd one out and uses its own dmabounce framework instead,
but it seems like it doesn't get used in this case.  We need to make
sure dmabounce (or swiotlb for that matter) is set up if > 32-bit
addressing is supported.  I'm not really an arm platform expert,
but some of those on the Cc list are and might chime in on how to
do that.
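
To illustrate the contract being described, here is a hedged sketch of the
driver side (the example_map_buffer() helper is hypothetical; the dma_* calls
are the ordinary API): the driver only declares its 32-bit limit and maps
buffers, and on architectures wired up to swiotlb anything above the mask is
expected to be bounced transparently.

#include <linux/dma-mapping.h>
#include <linux/errno.h>

/* Hypothetical helper, sketching the expected behaviour only. */
static int example_map_buffer(struct device *dev, void *buf, size_t len,
			      dma_addr_t *dma)
{
	int ret;

	/* Controller can only generate 32-bit bus addresses. */
	ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));
	if (ret)
		return ret;

	*dma = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, *dma))
		return -ENOMEM;

	/*
	 * With swiotlb in place, *dma fits within the 32-bit mask even if
	 * buf lives above 4 GiB; the copy happens behind the driver's back.
	 */
	return 0;
}
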
Russell King (Oracle) June 27, 2019, 9:07 a.m. UTC | #2
On Wed, Jun 26, 2019 at 02:53:25PM +0200, Christoph Hellwig wrote:
> Hi Roger,
> 
> it seems the arm dma direct mapping code isn't doing the right thing
> here.  On other platforms that have > 4G memory we always use swiotlb
> for bounce buffering in case a device can't DMA to all the memory.
> 
> Arm is the odd one out and uses its own dmabounce framework instead,
> but it seems like it doesn't get used in this case.  We need to make
> sure dmabounce (or swiotlb for that matter) is set up if > 32-bit
> addressing is supported.  I'm not really an arm platform expert,
> but some of those on the Cc list are and might chime in on how to
> do that.

dmabounce has only ever been used with specific devices that have weird
setups.  Otherwise, we've never expected what you describe on ARM.  I
also don't recognise your assertion about the way the DMA API should
behave as ever having been documented as a requirement for architectures
to implement.

dmabounce brings with it some serious issues that have been known to
cause OOMs with old kernels (and which have never been solved), so that
really isn't the answer.

This kind of breakage now being reported is exactly the problem
that I thought would happen with your patches.

32-bit ARM is in maintenance-only mode now; there is no longer any appetite
or funding for development work on core code.

-- 
RMK's Patch system: https://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line in suburbia: sync at 12.1Mbps down 622kbps up
According to speedtest.net: 11.9Mbps down 500kbps up
Christoph Hellwig June 27, 2019, 9:15 a.m. UTC | #3
On Thu, Jun 27, 2019 at 10:07:53AM +0100, Russell King - ARM Linux admin wrote:
> dmabounce has only ever been used with specific devices that have weird
> setups.  Otherwise, we've never expected what you describe on ARM.  I
> also don't recognise your assertion about the way the DMA API should
> behave as ever having been documented as a requirement for architectures
> to implement.

That requirement has basically always been there since at least the
2.6.x days.  The history here is that when 64-bit architectures showed
up they all had iommus, so this wasn't an issue.  Next was x86 with
highmem, which added special bounce buffering for block I/O and networking
only.  Then ia64 showed up that didn't always have an iommu and swiotlb
was added as a "software IOMMU".  At this point we had two bounce buffering
schemes for block and networking, while everything else potentially
DMAing to higher memory relied on swiotlb, which was picked up by
basically every architecture that could have memory not covered by a
32-bit mask and didn't have an iommu.  Except it seems arm never did
that and has been lucky as people didn't try anything that is not
block or networking on their extended physical address space.
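
The block-layer scheme referred to here is the per-queue bounce limit, which is
what the proposed patch reinstates for SCSI. A minimal sketch of how a queue is
capped in this kernel era (the surrounding queue setup is omitted and the
function name is illustrative):

#include <linux/blkdev.h>
#include <linux/dma-mapping.h>

/*
 * Sketch only: pages sitting above the given bus address are copied
 * into low "bounce" pages by the block layer before the request is
 * handed to the driver.
 */
static void example_cap_queue(struct request_queue *q)
{
	/* Never hand this queue addresses above 4 GiB. */
	blk_queue_bounce_limit(q, DMA_BIT_MASK(32));
}
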
Russell King (Oracle) June 27, 2019, 10:09 a.m. UTC | #4
On Thu, Jun 27, 2019 at 11:15:30AM +0200, Christoph Hellwig wrote:
> On Thu, Jun 27, 2019 at 10:07:53AM +0100, Russell King - ARM Linux admin wrote:

> > dmabounce has only ever been used with specific devices that have weird

> > setups.  Otherwise, we've never expected what you describe on ARM.  I

> > also don't recognise your assertion about the way the DMA API should

> > behave as ever having been documented as a requirement for architectures

> > to implement.

> 

> That requirement has basically always been there since at least the

> 2.6.x days.  The history here is that when 64-bit architectures showed

> up they all had iommus, so this wasn't an issue.  Next was x86 with

> highmem, which added special bounce buffering for block I/O and networking

> only.  Then ia64 showed up that didn't always have an iommu and swiotlb

> was added as a "software IOMMU".  At this point we had to bounce buffering

> schemes for block and networking, while everything else potentially

> DMAing to higher memory relied on swiotlb, which was picked up by

> basically every architecture that could have memory not covered by a

> 32-bit mask and didn't have an iommu.


If it wasn't documented...

> Except it seems arm never did
> that and has been lucky as people didn't try anything that is not
> block or networking on their extended physical address space.

Because no one knew that the requirement had been introduced, and we've
been reliant on all the code you've been stripping out.

You're now causing regressions on 32-bit ARM, so you get to fix it.

-- 
RMK's Patch system: https://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line in suburbia: sync at 12.1Mbps down 622kbps up
According to speedtest.net: 11.9Mbps down 500kbps up

Patch

diff --git a/drivers/ata/ahci_platform.c b/drivers/ata/ahci_platform.c
index 3aab2e3d57f3..b925dc54cfa5 100644
--- a/drivers/ata/ahci_platform.c
+++ b/drivers/ata/ahci_platform.c
@@ -62,6 +62,9 @@  static int ahci_probe(struct platform_device *pdev)
 	if (of_device_is_compatible(dev->of_node, "hisilicon,hisi-ahci"))
 		hpriv->flags |= AHCI_HFLAG_NO_FBS | AHCI_HFLAG_NO_NCQ;
 
+	if (of_device_is_compatible(dev->of_node, "snps,dwc-ahci"))
+		hpriv->flags |= AHCI_HFLAG_32BIT_ONLY;
+
 	port = acpi_device_get_match_data(dev);
 	if (!port)
 		port = &ahci_port_info;
diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index 65d0a10c76ad..9083c7b89dfc 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -1750,6 +1750,21 @@  static int scsi_map_queues(struct blk_mq_tag_set *set)
 	return blk_mq_map_queues(&set->map[HCTX_TYPE_DEFAULT]);
 }
 
+static u64 scsi_calculate_bounce_limit(struct Scsi_Host *shost)
+{
+	struct device *dma_dev = shost->dma_dev;
+	u64 bounce_limit = 0xffffffff;
+
+	if (shost->unchecked_isa_dma)
+		return BLK_BOUNCE_ISA;
+
+	if (dma_dev && dma_dev->dma_mask)
+		bounce_limit = (u64)dma_max_pfn(dma_dev) << PAGE_SHIFT;
+
+	return bounce_limit;
+}
+
 void __scsi_init_queue(struct Scsi_Host *shost, struct request_queue *q)
 {
 	struct device *dev = shost->dma_dev;
@@ -1769,8 +1784,7 @@  void __scsi_init_queue(struct Scsi_Host *shost, struct request_queue *q)