diff mbox series

[v5,3/6] arm: provide a function for boards init code to modify MMU virtual-physical map

Message ID 20200603124345.18595-4-m.szyprowski@samsung.com
State Accepted
Commit d877f8fd0f0973451f5c48d37c5d7fb761075765
Series [v5,1/6] powerpc: move ADDR_MAP to Kconfig

Commit Message

Marek Szyprowski June 3, 2020, 12:43 p.m. UTC
Provide function for setting arbitrary virtual-physical MMU mapping
and cache settings for the given region.

Signed-off-by: Marek Szyprowski <m.szyprowski at samsung.com>
---
 arch/arm/include/asm/mmu.h    |  8 ++++++++
 arch/arm/include/asm/system.h | 13 +++++++++++++
 arch/arm/lib/cache-cp15.c     | 24 ++++++++++++++++++------
 3 files changed, 39 insertions(+), 6 deletions(-)
 create mode 100644 arch/arm/include/asm/mmu.h

Comments

Tom Rini June 3, 2020, 1:55 p.m. UTC | #1
On Wed, Jun 03, 2020 at 02:43:42PM +0200, Marek Szyprowski wrote:

> Provide function for setting arbitrary virtual-physical MMU mapping
> and cache settings for the given region.
> 
> Signed-off-by: Marek Szyprowski <m.szyprowski at samsung.com>

Reviewed-by: Tom Rini <trini at konsulko.com>
Patrick Delaunay July 23, 2020, 4:32 p.m. UTC | #2
Hi Marek,

> From: U-Boot <u-boot-bounces@lists.denx.de> On Behalf Of Tom Rini
> Sent: Friday, July 10, 2020 22:22
>
> On Wed, Jun 03, 2020 at 02:43:42PM +0200, Marek Szyprowski wrote:
>
> > Provide function for setting arbitrary virtual-physical MMU mapping
> > and cache settings for the given region.
> >
> > Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
> > Reviewed-by: Tom Rini <trini@konsulko.com>
>
> Applied to u-boot/master, thanks!


For information, this patch breaks the SPL boot on the stm32mp1 platform in the master branch.

It is linked to commit 7e8471cae5c6614c54b9cfae2746d7299bd47a0c
("arm: stm32mp: activate data cache in SPL and before relocation")

For the lines:

+		mmu_set_region_dcache_behaviour(STM32_SYSRAM_BASE,
+						STM32_SYSRAM_SIZE,
+						DCACHE_DEFAULT_OPTION);

In this patch, the STM32MP15x activates the DCACHE on SYSRAM with:

#define STM32_SYSRAM_BASE		0x2FFC0000
#define STM32_SYSRAM_SIZE		SZ_256K

Even if this address is not MMU_SECTION_SIZE aligned, the configuration of the MMU
section was correctly aligned in set_section_dcache:

-	value |= ((u32)section << MMU_SECTION_SHIFT); 

With the caller:

void mmu_set_region_dcache_behaviour (
.....
	/* div by 2 before start + size to avoid phys_addr_t overflow */
	end = ALIGN((start / 2) + (size / 2), MMU_SECTION_SIZE / 2)
	      >> (MMU_SECTION_SHIFT - 1);

	start = start >> MMU_SECTION_SHIFT;
....
-	for (upto = start; upto < end; upto++)
-		set_section_dcache(upto, option);


But today this is no longer the case when the start address is not MMU_SECTION_SIZE aligned.

void mmu_set_region_dcache_behaviour(phys_addr_t start, size_t size,
				     enum dcache_option option)
{
	mmu_set_region_dcache_behaviour_phys(start, start, size, option);
}

The 'start' parameter is directly used as the 'phys' address in the next function:

void mmu_set_region_dcache_behaviour_phys(phys_addr_t start, phys_addr_t phys,
					size_t size, enum dcache_option option)
{
.....
	for (upto = start; upto < end; upto++, phys += MMU_SECTION_SIZE)
		set_section_phys(upto, phys, option);


and then

static void set_section_phys(int section, phys_addr_t phys,
			     enum dcache_option option)
{
....

	/* Add the page offset */
	value |= phys;


So today I can solve my issue if I align the section in my code:

 	if (IS_ENABLED(CONFIG_SPL_BUILD))
-		mmu_set_region_dcache_behaviour(STM32_SYSRAM_BASE,
-						STM32_SYSRAM_SIZE,
-						DCACHE_DEFAULT_OPTION);
+		mmu_set_region_dcache_behaviour(
+			ALIGN(STM32_SYSRAM_BASE, MMU_SECTION_SIZE),
+			round_up(STM32_SYSRAM_SIZE, MMU_SECTION_SIZE),
+			DCACHE_DEFAULT_OPTION);
 	else

But I am concerned that this library behavior is really unexpected: one needs to provide a
MMU_SECTION_SIZE-aligned start address when mmu_set_region_dcache_behaviour() is called.

Do you think we need to restore the previous behavior of the cp15 function mmu_set_region_dcache_behaviour()?
=> internally align the DCACHE region on MMU_SECTION_SIZE when the parameters, start or size, are not aligned.

See also the explanation for another use case and the start / end / size usage:
http://patchwork.ozlabs.org/project/uboot/patch/20200424201957.v2.3.Ic2c7c6923035711a4c653d52ae7c0f57ca6f9210@changeid/


> --
> Tom


Regards
Patrick
Marek Szyprowski Aug. 3, 2020, 12:32 p.m. UTC | #3
Hi Patrick,

On 23.07.2020 18:32, Patrick DELAUNAY wrote:
> Hi Marek,
>
>> From: U-Boot <u-boot-bounces@lists.denx.de> On Behalf Of Tom Rini
>> Sent: Friday, July 10, 2020 22:22
>>
>> On Wed, Jun 03, 2020 at 02:43:42PM +0200, Marek Szyprowski wrote:
>>
>>> Provide function for setting arbitrary virtual-physical MMU mapping
>>> and cache settings for the given region.
>>>
>>> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
>>> Reviewed-by: Tom Rini <trini@konsulko.com>
>>
>> Applied to u-boot/master, thanks!
>
> For information, this patch breaks the SPL boot on the stm32mp1 platform in the master branch.
>
> It is linked to commit 7e8471cae5c6614c54b9cfae2746d7299bd47a0c
> ("arm: stm32mp: activate data cache in SPL and before relocation")
>
> For the lines:
>
> +		mmu_set_region_dcache_behaviour(STM32_SYSRAM_BASE,
> +						STM32_SYSRAM_SIZE,
> +						DCACHE_DEFAULT_OPTION);
>
> In this patch, the STM32MP15x activates the DCACHE on SYSRAM with:
>
> #define STM32_SYSRAM_BASE		0x2FFC0000
> #define STM32_SYSRAM_SIZE		SZ_256K
>
> Even if this address is not MMU_SECTION_SIZE aligned, the configuration of the MMU
> section was correctly aligned in set_section_dcache:
>
> -	value |= ((u32)section << MMU_SECTION_SHIFT);
>
> With the caller:
>
> void mmu_set_region_dcache_behaviour (
> .....
> 	/* div by 2 before start + size to avoid phys_addr_t overflow */
> 	end = ALIGN((start / 2) + (size / 2), MMU_SECTION_SIZE / 2)
> 	      >> (MMU_SECTION_SHIFT - 1);
> 	start = start >> MMU_SECTION_SHIFT;
> ....
> -	for (upto = start; upto < end; upto++)
> -		set_section_dcache(upto, option);
>
> But today this is no longer the case when the start address is not MMU_SECTION_SIZE aligned.
>
> void mmu_set_region_dcache_behaviour(phys_addr_t start, size_t size,
> 				     enum dcache_option option)
> {
> 	mmu_set_region_dcache_behaviour_phys(start, start, size, option);
> }
>
> The 'start' parameter is directly used as the 'phys' address in the next function:
>
> void mmu_set_region_dcache_behaviour_phys(phys_addr_t start, phys_addr_t phys,
> 					size_t size, enum dcache_option option)
> {
> .....
> 	for (upto = start; upto < end; upto++, phys += MMU_SECTION_SIZE)
> 		set_section_phys(upto, phys, option);
>
> and then
>
> static void set_section_phys(int section, phys_addr_t phys,
> 			     enum dcache_option option)
> {
> ....
> 	/* Add the page offset */
> 	value |= phys;
>
> So today I can solve my issue if I align the section in my code:
>
>   	if (IS_ENABLED(CONFIG_SPL_BUILD))
> -		mmu_set_region_dcache_behaviour(STM32_SYSRAM_BASE,
> -						STM32_SYSRAM_SIZE,
> -						DCACHE_DEFAULT_OPTION);
> +		mmu_set_region_dcache_behaviour(
> +			ALIGN(STM32_SYSRAM_BASE, MMU_SECTION_SIZE),
> +			round_up(STM32_SYSRAM_SIZE, MMU_SECTION_SIZE),
> +			DCACHE_DEFAULT_OPTION);
>   	else
>
> But I am concerned that this library behavior is really unexpected: one needs to provide a
> MMU_SECTION_SIZE-aligned start address when mmu_set_region_dcache_behaviour() is called.
>
> Do you think we need to restore the previous behavior of the cp15 function mmu_set_region_dcache_behaviour()?
> => internally align the DCACHE region on MMU_SECTION_SIZE when the parameters, start or size, are not aligned.
>
> See also the explanation for another use case and the start / end / size usage:
> http://patchwork.ozlabs.org/project/uboot/patch/20200424201957.v2.3.Ic2c7c6923035711a4c653d52ae7c0f57ca6f9210@changeid/


I'm sorry for breaking your setup. However, IMHO the caller should ensure
that the parameters are correctly aligned. The old behavior modified the
cache parameters on a larger area of memory than the caller wanted, which
might lead to side effects. I would prefer aligning the parameters in the
callers, and maybe adding a comment to the function description stating
that it assumes the parameters are properly aligned. Adding a check for
proper alignment is a bit useless, because this function is usually called
so early that no message can be printed.

Best regards
-- 
Marek Szyprowski, PhD
Samsung R&D Institute Poland

Patch

diff --git a/arch/arm/include/asm/mmu.h b/arch/arm/include/asm/mmu.h
new file mode 100644
index 0000000..9ac16f5
--- /dev/null
+++ b/arch/arm/include/asm/mmu.h
@@ -0,0 +1,8 @@ 
+/* SPDX-License-Identifier: GPL-2.0+ */
+
+#ifndef __ASM_ARM_MMU_H
+#define __ASM_ARM_MMU_H
+
+void init_addr_map(void);
+
+#endif
diff --git a/arch/arm/include/asm/system.h b/arch/arm/include/asm/system.h
index 63649ed..5d3b6d0 100644
--- a/arch/arm/include/asm/system.h
+++ b/arch/arm/include/asm/system.h
@@ -585,6 +585,19 @@  s32 psci_features(u32 function_id, u32 psci_fid);
 void save_boot_params_ret(void);
 
 /**
+ * mmu_set_region_dcache_behaviour_phys() - set virt/phys mapping
+ *
+ * Change the virt/phys mapping and cache settings for a region.
+ *
+ * @virt:	virtual start address of memory region to change
+ * @phys:	physical address for the memory region to set
+ * @size:	size of memory region to change
+ * @option:	dcache option to select
+ */
+void mmu_set_region_dcache_behaviour_phys(phys_addr_t virt, phys_addr_t phys,
+					size_t size, enum dcache_option option);
+
+/**
  * mmu_set_region_dcache_behaviour() - set cache settings
  *
  * Change the cache settings for a region.
diff --git a/arch/arm/lib/cache-cp15.c b/arch/arm/lib/cache-cp15.c
index 1da2e92..3971761 100644
--- a/arch/arm/lib/cache-cp15.c
+++ b/arch/arm/lib/cache-cp15.c
@@ -25,7 +25,8 @@  __weak void arm_init_domains(void)
 {
 }
 
-void set_section_dcache(int section, enum dcache_option option)
+static void set_section_phys(int section, phys_addr_t phys,
+			     enum dcache_option option)
 {
 #ifdef CONFIG_ARMV7_LPAE
 	u64 *page_table = (u64 *)gd->arch.tlb_addr;
@@ -37,7 +38,7 @@  void set_section_dcache(int section, enum dcache_option option)
 #endif
 
 	/* Add the page offset */
-	value |= ((u32)section << MMU_SECTION_SHIFT);
+	value |= phys;
 
 	/* Add caching bits */
 	value |= option;
@@ -46,13 +47,18 @@  void set_section_dcache(int section, enum dcache_option option)
 	page_table[section] = value;
 }
 
+void set_section_dcache(int section, enum dcache_option option)
+{
+	set_section_phys(section, (u32)section << MMU_SECTION_SHIFT, option);
+}
+
 __weak void mmu_page_table_flush(unsigned long start, unsigned long stop)
 {
 	debug("%s: Warning: not implemented\n", __func__);
 }
 
-void mmu_set_region_dcache_behaviour(phys_addr_t start, size_t size,
-				     enum dcache_option option)
+void mmu_set_region_dcache_behaviour_phys(phys_addr_t start, phys_addr_t phys,
+					size_t size, enum dcache_option option)
 {
 #ifdef CONFIG_ARMV7_LPAE
 	u64 *page_table = (u64 *)gd->arch.tlb_addr;
@@ -74,8 +80,8 @@  void mmu_set_region_dcache_behaviour(phys_addr_t start, size_t size,
 	debug("%s: start=%pa, size=%zu, option=0x%x\n", __func__, &start, size,
 	      option);
 #endif
-	for (upto = start; upto < end; upto++)
-		set_section_dcache(upto, option);
+	for (upto = start; upto < end; upto++, phys += MMU_SECTION_SIZE)
+		set_section_phys(upto, phys, option);
 
 	/*
 	 * Make sure range is cache line aligned
@@ -90,6 +96,12 @@  void mmu_set_region_dcache_behaviour(phys_addr_t start, size_t size,
 	mmu_page_table_flush(startpt, stoppt);
 }
 
+void mmu_set_region_dcache_behaviour(phys_addr_t start, size_t size,
+				     enum dcache_option option)
+{
+	mmu_set_region_dcache_behaviour_phys(start, start, size, option);
+}
+
 __weak void dram_bank_mmu_setup(int bank)
 {
 	bd_t *bd = gd->bd;