
[edk2] ArmPkg/ArmMmuLib ARM: remove cache maintenance of block mapping contents

Message ID 20180620191007.1250-1-ard.biesheuvel@linaro.org
State New

Commit Message

Ard Biesheuvel June 20, 2018, 7:10 p.m. UTC
Peculiarly enough, the current page table manipulation code takes it
upon itself to write back and invalidate the memory contents covered
by section mappings when their memory attributes change. It is not
generally the case that data must be written back when such a change
occurs, even when switching from cacheable to non-cacheable attributes,
and in some cases, it is actually causing problems. (The cache
maintenance is also performed on the PCIe MMIO regions as they get
mapped by the PCI bus driver, and under virtualization, each cache
maintenance operation on an emulated MMIO region triggers a round
trip to the host and back)

So let's just drop this code.

Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>

---
 ArmPkg/Library/ArmMmuLib/Arm/ArmMmuLibCore.c | 6 ------
 1 file changed, 6 deletions(-)

-- 
2.17.1

_______________________________________________
edk2-devel mailing list
edk2-devel@lists.01.org
https://lists.01.org/mailman/listinfo/edk2-devel

Comments

Leif Lindholm June 20, 2018, 10:05 p.m. UTC | #1
On Wed, Jun 20, 2018 at 09:10:07PM +0200, Ard Biesheuvel wrote:
> Peculiarly enough, the current page table manipulation code takes it
> upon itself to write back and invalidate the memory contents covered
> by section mappings when their memory attributes change. It is not
> generally the case that data must be written back when such a change
> occurs, even when switching from cacheable to non-cacheable attributes,
> and in some cases, it is actually causing problems. (The cache
> maintenance is also performed on the PCIe MMIO regions as they get
> mapped by the PCI bus driver, and under virtualization, each cache
> maintenance operation on an emulated MMIO region triggers a round
> trip to the host and back)
>
> So let's just drop this code.

I guess this is a remnant from pre-ARMv7 days, when translation table
walks weren't inner-cacheable. I think we've already determined that
ARMv7-A+ is required for PI compliance anyway, so I guess dropping
this should be fine.

But if so, don't we also want to get rid of the other two instances of
WriteBackInvalidateDataCacheRange() in this file?

/
    Leif

> Contributed-under: TianoCore Contribution Agreement 1.1
> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> ---
>  ArmPkg/Library/ArmMmuLib/Arm/ArmMmuLibCore.c | 6 ------
>  1 file changed, 6 deletions(-)
>
> diff --git a/ArmPkg/Library/ArmMmuLib/Arm/ArmMmuLibCore.c b/ArmPkg/Library/ArmMmuLib/Arm/ArmMmuLibCore.c
> index 9bf4ba03fd5b..d1bca4c601b8 100644
> --- a/ArmPkg/Library/ArmMmuLib/Arm/ArmMmuLibCore.c
> +++ b/ArmPkg/Library/ArmMmuLib/Arm/ArmMmuLibCore.c
> @@ -718,12 +718,6 @@ UpdateSectionEntries (
>        if (CurrentDescriptor  != Descriptor) {
>          Mva = (VOID *)(UINTN)(((UINTN)FirstLevelIdx + i) << TT_DESCRIPTOR_SECTION_BASE_SHIFT);
>
> -        // Clean/invalidate the cache for this section, but only
> -        // if we are modifying the memory type attributes
> -        if (((CurrentDescriptor ^ Descriptor) & TT_DESCRIPTOR_SECTION_CACHE_POLICY_MASK) != 0) {
> -          WriteBackInvalidateDataCacheRange (Mva, SIZE_1MB);
> -        }
> -
>          // Only need to update if we are changing the descriptor
>          FirstLevelTable[FirstLevelIdx + i] = Descriptor;
>          ArmUpdateTranslationTableEntry ((VOID *)&FirstLevelTable[FirstLevelIdx + i], Mva);
> --
> 2.17.1

Ard Biesheuvel June 21, 2018, 6:21 a.m. UTC | #2
On 21 June 2018 at 00:05, Leif Lindholm <leif.lindholm@linaro.org> wrote:
> On Wed, Jun 20, 2018 at 09:10:07PM +0200, Ard Biesheuvel wrote:
>> Peculiarly enough, the current page table manipulation code takes it
>> upon itself to write back and invalidate the memory contents covered
>> by section mappings when their memory attributes change. It is not
>> generally the case that data must be written back when such a change
>> occurs, even when switching from cacheable to non-cacheable attributes,
>> and in some cases, it is actually causing problems. (The cache
>> maintenance is also performed on the PCIe MMIO regions as they get
>> mapped by the PCI bus driver, and under virtualization, each cache
>> maintenance operation on an emulated MMIO region triggers a round
>> trip to the host and back)
>>
>> So let's just drop this code.
>
> I guess this is a remnant from pre-ARMv7 days, when translation table
> walks weren't inner-cacheable. I think we've already determined that
> ARMv7-A+ is required for PI compliance anyway, so I guess dropping
> this should be fine.

No, this one is different. This does not flush the page containing the
page table entry that is modified, but it flushes the memory that the
page table entry maps (a 1 MB section in this case)

> But if so, don't we also want to get rid of the other two instances of
> WriteBackInvalidateDataCacheRange() in this file?

If we assume page table walks are coherent, then yes, we could remove
some more code, but that should be a separate patch.

>> Contributed-under: TianoCore Contribution Agreement 1.1
>> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
>> ---
>>  ArmPkg/Library/ArmMmuLib/Arm/ArmMmuLibCore.c | 6 ------
>>  1 file changed, 6 deletions(-)
>>
>> diff --git a/ArmPkg/Library/ArmMmuLib/Arm/ArmMmuLibCore.c b/ArmPkg/Library/ArmMmuLib/Arm/ArmMmuLibCore.c
>> index 9bf4ba03fd5b..d1bca4c601b8 100644
>> --- a/ArmPkg/Library/ArmMmuLib/Arm/ArmMmuLibCore.c
>> +++ b/ArmPkg/Library/ArmMmuLib/Arm/ArmMmuLibCore.c
>> @@ -718,12 +718,6 @@ UpdateSectionEntries (
>>        if (CurrentDescriptor  != Descriptor) {
>>          Mva = (VOID *)(UINTN)(((UINTN)FirstLevelIdx + i) << TT_DESCRIPTOR_SECTION_BASE_SHIFT);
>>
>> -        // Clean/invalidate the cache for this section, but only
>> -        // if we are modifying the memory type attributes
>> -        if (((CurrentDescriptor ^ Descriptor) & TT_DESCRIPTOR_SECTION_CACHE_POLICY_MASK) != 0) {
>> -          WriteBackInvalidateDataCacheRange (Mva, SIZE_1MB);
>> -        }
>> -
>>          // Only need to update if we are changing the descriptor
>>          FirstLevelTable[FirstLevelIdx + i] = Descriptor;
>>          ArmUpdateTranslationTableEntry ((VOID *)&FirstLevelTable[FirstLevelIdx + i], Mva);
>> --
>> 2.17.1

Leif Lindholm June 21, 2018, 10:48 a.m. UTC | #3
Replying to this one rather than the new patches, to keep the thread going.

On Thu, Jun 21, 2018 at 08:21:22AM +0200, Ard Biesheuvel wrote:
> On 21 June 2018 at 00:05, Leif Lindholm <leif.lindholm@linaro.org> wrote:
> > On Wed, Jun 20, 2018 at 09:10:07PM +0200, Ard Biesheuvel wrote:
> >> Peculiarly enough, the current page table manipulation code takes it
> >> upon itself to write back and invalidate the memory contents covered
> >> by section mappings when their memory attributes change. It is not
> >> generally the case that data must be written back when such a change
> >> occurs, even when switching from cacheable to non-cacheable attributes,
> >> and in some cases, it is actually causing problems. (The cache
> >> maintenance is also performed on the PCIe MMIO regions as they get
> >> mapped by the PCI bus driver, and under virtualization, each cache
> >> maintenance operation on an emulated MMIO region triggers a round
> >> trip to the host and back)
> >>
> >> So let's just drop this code.
> >
> > I guess this is a remnant from pre-ARMv7 days, when translation table
> > walks weren't inner-cacheable. I think we've already determined that
> > ARMv7-A+ is required for PI compliance anyway, so I guess dropping
> > this should be fine.
>
> No, this one is different. This does not flush the page containing the
> page table entry that is modified, but it flushes the memory that the
> page table entry maps (a 1 MB section in this case)

*scratches head*

Sounds like someone was trying to be overly helpful by completely
hiding architectural complexity. Surely if someone is remapping a
region to different cacheability settings, the responsibility needs to
be on them to manually clean/invalidate?

So yeah, that needs to go.

> > But if so, don't we also want to get rid of the other two instances of
> > WriteBackInvalidateDataCacheRange() in this file?
>
> If we assume page table walks are coherent, then yes, we could remove
> some more code, but that should be a separate patch.

Hmm...
I'm starting to have second thoughts about that.
UEFI explicitly says "The binding does not mandate whether page tables
are cached or un-cached.".

Otoh, we know that we only practically support implementations that
are required to be able to do coherent page table walks. So maybe we
can nuke it and reinstate it under a Pcd (default off) if anyone
screams?

/
    Leif

> >> Contributed-under: TianoCore Contribution Agreement 1.1
> >> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> >> ---
> >>  ArmPkg/Library/ArmMmuLib/Arm/ArmMmuLibCore.c | 6 ------
> >>  1 file changed, 6 deletions(-)
> >>
> >> diff --git a/ArmPkg/Library/ArmMmuLib/Arm/ArmMmuLibCore.c b/ArmPkg/Library/ArmMmuLib/Arm/ArmMmuLibCore.c
> >> index 9bf4ba03fd5b..d1bca4c601b8 100644
> >> --- a/ArmPkg/Library/ArmMmuLib/Arm/ArmMmuLibCore.c
> >> +++ b/ArmPkg/Library/ArmMmuLib/Arm/ArmMmuLibCore.c
> >> @@ -718,12 +718,6 @@ UpdateSectionEntries (
> >>        if (CurrentDescriptor  != Descriptor) {
> >>          Mva = (VOID *)(UINTN)(((UINTN)FirstLevelIdx + i) << TT_DESCRIPTOR_SECTION_BASE_SHIFT);
> >>
> >> -        // Clean/invalidate the cache for this section, but only
> >> -        // if we are modifying the memory type attributes
> >> -        if (((CurrentDescriptor ^ Descriptor) & TT_DESCRIPTOR_SECTION_CACHE_POLICY_MASK) != 0) {
> >> -          WriteBackInvalidateDataCacheRange (Mva, SIZE_1MB);
> >> -        }
> >> -
> >>          // Only need to update if we are changing the descriptor
> >>          FirstLevelTable[FirstLevelIdx + i] = Descriptor;
> >>          ArmUpdateTranslationTableEntry ((VOID *)&FirstLevelTable[FirstLevelIdx + i], Mva);
> >> --
> >> 2.17.1


Patch

diff --git a/ArmPkg/Library/ArmMmuLib/Arm/ArmMmuLibCore.c b/ArmPkg/Library/ArmMmuLib/Arm/ArmMmuLibCore.c
index 9bf4ba03fd5b..d1bca4c601b8 100644
--- a/ArmPkg/Library/ArmMmuLib/Arm/ArmMmuLibCore.c
+++ b/ArmPkg/Library/ArmMmuLib/Arm/ArmMmuLibCore.c
@@ -718,12 +718,6 @@  UpdateSectionEntries (
       if (CurrentDescriptor  != Descriptor) {
         Mva = (VOID *)(UINTN)(((UINTN)FirstLevelIdx + i) << TT_DESCRIPTOR_SECTION_BASE_SHIFT);
 
-        // Clean/invalidate the cache for this section, but only
-        // if we are modifying the memory type attributes
-        if (((CurrentDescriptor ^ Descriptor) & TT_DESCRIPTOR_SECTION_CACHE_POLICY_MASK) != 0) {
-          WriteBackInvalidateDataCacheRange (Mva, SIZE_1MB);
-        }
-
         // Only need to update if we are changing the descriptor
         FirstLevelTable[FirstLevelIdx + i] = Descriptor;
         ArmUpdateTranslationTableEntry ((VOID *)&FirstLevelTable[FirstLevelIdx + i], Mva);