Message ID | 20190812173019.11956-3-julien.grall@arm.com |
---|---|
State | Superseded |
Series | xen/arm: Rework head.S to make it more compliant with the Arm Arm |
On Mon, 12 Aug 2019, Julien Grall wrote: > The 1:1 mapping may clash with other parts of the Xen virtual memory > layout. At the moment, Xen is handling the clash by only creating a > mapping to the runtime virtual address before enabling the MMU. > > The rest of the mappings (such as the fixmap) will be mapped after the > MMU is enabled. However, the code doing the mapping is not safe as it > replace mapping without using the Break-Before-Make sequence. > > As the 1:1 mapping can be anywhere in the memory, it is easier to remove > all the entries added as soon as the 1:1 mapping is not used rather than > adding the Break-Before-Make sequence everywhere. > > It is difficult to track where exactly the 1:1 mapping was created > without a full rework of create_page_tables(). Instead, introduce a new > function remove_identity_mapping() will look where is the top-level entry > for the 1:1 mapping and remove it. > > The new function is only called for the boot CPU. Secondary CPUs will > switch directly to the runtime page-tables so there are no need to > remove the 1:1 mapping. Note that this still doesn't make the Secondary > CPUs path safe but it is not making it worst. > > Signed-off-by: Julien Grall <julien.grall@arm.com> > > --- > It is very likely we will need to re-introduce the 1:1 mapping to cater > secondary CPUs boot and suspend/resume. For now, the attempt is to make > boot CPU path fully Arm Arm compliant. > > Changes in v3: > - Avoid hardcoding slots > > Changes in v2: > - s/ID map/1:1 mapping/ > - Rename remove_id_map() to remove_identity_mapping() > - Add missing signed-off-by > --- > xen/arch/arm/arm64/head.S | 94 +++++++++++++++++++++++++++++++++++++++-------- > 1 file changed, 79 insertions(+), 15 deletions(-) > > diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S > index 50cff08756..ec138aae3e 100644 > --- a/xen/arch/arm/arm64/head.S > +++ b/xen/arch/arm/arm64/head.S > @@ -33,6 +33,11 @@ > #define PT_DEV 0xe71 /* nG=1 AF=1 SH=10 AP=01 NS=1 ATTR=100 T=0 P=1 */ > #define PT_DEV_L3 0xe73 /* nG=1 AF=1 SH=10 AP=01 NS=1 ATTR=100 T=1 P=1 */ > > +/* Convenience defines to get slot used by Xen mapping. */ > +#define XEN_ZEROETH_SLOT zeroeth_table_offset(XEN_VIRT_START) > +#define XEN_FIRST_SLOT first_table_offset(XEN_VIRT_START) > +#define XEN_SECOND_SLOT second_table_offset(XEN_VIRT_START) > + > #define __HEAD_FLAG_PAGE_SIZE ((PAGE_SHIFT - 10) / 2) > > #define __HEAD_FLAG_PHYS_BASE 1 > @@ -301,6 +306,13 @@ real_start_efi: > ldr x0, =primary_switched > br x0 > primary_switched: > + /* > + * The 1:1 map may clash with other parts of the Xen virtual memory > + * layout. As it is not used anymore, remove it completely to > + * avoid having to worry about replacing existing mapping > + * afterwards. > + */ > + bl remove_identity_mapping > bl setup_fixmap > #ifdef CONFIG_EARLY_PRINTK > /* Use a virtual address to access the UART. */ > @@ -626,10 +638,71 @@ enable_mmu: > ret > ENDPROC(enable_mmu) > > +/* > + * Remove the 1:1 map for the page-tables. It is not easy to keep track ^ from > + * where the 1:1 map was mapped, so we will look for the top-level entry > + * exclusive to the 1:1 map and remove it. > + * > + * Inputs: > + * x19: paddr(start) > + * > + * Clobbers x0 - x1 > + */ > +remove_identity_mapping: > + /* > + * Find the zeroeth slot used. Remove the entry from zeroeth > + * table if the slot is not XEN_ZEROETH_SLOT. This part of the comment is good > + * For slot XEN_ZEROETH_SLOT, the 1:1 mapping was either done in first > + * or second table. 
I don't think this sentence is very useful now that the slot is not hard-coded anymore. I would remove it. Instead, I would write something like "The slot XEN_ZEROETH_SLOT is used for the XEN_VIRT_START mapping." The same goes for all the other levels. > + */ > + lsr x1, x19, #ZEROETH_SHIFT /* x1 := zeroeth slot */ > + cmp x1, #XEN_ZEROETH_SLOT > + beq 1f > + /* It is not in slot XEN_ZEROETH_SLOT, remove the entry. */ > + ldr x0, =boot_pgtable /* x0 := root table */ > + str xzr, [x0, x1, lsl #3] > + b identity_mapping_removed > + > +1: > + /* > + * Find the first slot used. Remove the entry for the first > + * table if the slot is not XEN_FIRST_SLOT. For slot XEN_FIRST_SLOT, > + * the 1:1 mapping was done in the second table. > + */ > + lsr x1, x19, #FIRST_SHIFT > + and x1, x1, #LPAE_ENTRY_MASK /* x1 := first slot */ > + cmp x1, #XEN_FIRST_SLOT > + beq 1f > + /* It is not in slot XEN_FIRST_SLOT, remove the entry. */ > + ldr x0, =boot_first /* x0 := first table */ > + str xzr, [x0, x1, lsl #3] > + b identity_mapping_removed > + > +1: > + /* > + * Find the second slot used. Remove the entry for the first > + * table if the slot is not XEN_SECOND_SLOT. For slot XEN_SECOND_SLOT, > + * it means the 1:1 mapping was not created. > + */ > + lsr x1, x19, #SECOND_SHIFT > + and x1, x1, #LPAE_ENTRY_MASK /* x1 := first slot */ > + cmp x1, #XEN_SECOND_SLOT > + beq identity_mapping_removed > + /* It is not in slot 1, remove the entry */ > + ldr x0, =boot_second /* x0 := second table */ > + str xzr, [x0, x1, lsl #3] > + > +identity_mapping_removed: > + /* See asm-arm/arm64/flushtlb.h for the explanation of the sequence. */ > + dsb nshst > + tlbi alle2 > + dsb nsh > + isb I just want to point out that asm-arm/arm64/flushtlb.h says to use: * DSB ISHST // Ensure prior page-tables updates have completed * TLBI... // Invalidate the TLB * DSB ISH // Ensure the TLB invalidation has completed * ISB // See explanation below Also implemented as: asm volatile( \ "dsb ishst;" \ "tlbi " # tlbop ";" \ "dsb ish;" \ "isb;" \ : : : "memory"); \ Why is non-shareable enough? Shouldn't it be inner shareable? > + ret > +ENDPROC(remove_identity_mapping) > + > setup_fixmap: > - /* Now we can install the fixmap and dtb mappings, since we > - * don't need the 1:1 map any more */ > - dsb sy > #if defined(CONFIG_EARLY_PRINTK) /* Fixmap is only used by early printk */ > /* Add UART to the fixmap table */ > ldr x1, =xen_fixmap /* x1 := vaddr (xen_fixmap) */ > @@ -647,19 +720,10 @@ setup_fixmap: > ldr x1, =FIXMAP_ADDR(0) > lsr x1, x1, #(SECOND_SHIFT - 3) /* x1 := Slot for FIXMAP(0) */ > str x2, [x4, x1] /* Map it in the fixmap's slot */ > -#endif > > - /* > - * Flush the TLB in case the 1:1 mapping happens to clash with > - * the virtual addresses used by the fixmap or DTB. > - */ > - dsb sy /* Ensure any page table updates made above > - * have occurred. */ > - > - isb > - tlbi alle2 > - dsb sy /* Ensure completion of TLB flush */ > - isb > + /* Ensure any page table updates made above have occurred. */ > + dsb nshst Same here about shareability > +#endif > ret > ENDPROC(setup_fixmap) > > -- > 2.11.0 >
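[Editorial note: as an aside on the slot arithmetic reviewed above, the sketch below shows how an address maps to a zeroeth/first/second slot with the usual AArch64 4KB-granule LPAE shifts (39/30/21 bits, 512 entries per table). The helper name, the 2MB XEN_VIRT_START value and the example load address are illustrative assumptions, not the exact Xen macros.]

    #include <stdint.h>
    #include <stdio.h>

    /* 4KB granule LPAE: 9 bits of index per level, 512 entries per table. */
    #define LPAE_ENTRY_MASK  0x1ffUL
    #define ZEROETH_SHIFT    39
    #define FIRST_SHIFT      30
    #define SECOND_SHIFT     21

    /* Table index ("slot") used by an address at a given level. */
    static unsigned int slot(uint64_t addr, unsigned int shift)
    {
        return (addr >> shift) & LPAE_ENTRY_MASK;
    }

    int main(void)
    {
        uint64_t xen_virt_start = 0x00200000UL; /* assumed 2MB runtime base */
        uint64_t paddr_start    = 0x80080000UL; /* example 1:1 (load) address */

        printf("Xen mapping slots: %u/%u/%u\n",
               slot(xen_virt_start, ZEROETH_SHIFT),
               slot(xen_virt_start, FIRST_SHIFT),
               slot(xen_virt_start, SECOND_SHIFT));
        printf("1:1 mapping slots: %u/%u/%u\n",
               slot(paddr_start, ZEROETH_SHIFT),
               slot(paddr_start, FIRST_SHIFT),
               slot(paddr_start, SECOND_SHIFT));
        return 0;
    }

With these example values the two mappings first diverge at the first level (slot 0 vs slot 2), so remove_identity_mapping() would clear that entry in boot_first and leave boot_pgtable untouched.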
Hi Stefano, On 8/22/19 6:58 PM, Stefano Stabellini wrote: > On Mon, 12 Aug 2019, Julien Grall wrote: >> The 1:1 mapping may clash with other parts of the Xen virtual memory >> layout. At the moment, Xen is handling the clash by only creating a >> mapping to the runtime virtual address before enabling the MMU. >> >> The rest of the mappings (such as the fixmap) will be mapped after the >> MMU is enabled. However, the code doing the mapping is not safe as it >> replace mapping without using the Break-Before-Make sequence. >> >> As the 1:1 mapping can be anywhere in the memory, it is easier to remove >> all the entries added as soon as the 1:1 mapping is not used rather than >> adding the Break-Before-Make sequence everywhere. >> >> It is difficult to track where exactly the 1:1 mapping was created >> without a full rework of create_page_tables(). Instead, introduce a new >> function remove_identity_mapping() will look where is the top-level entry >> for the 1:1 mapping and remove it. >> >> The new function is only called for the boot CPU. Secondary CPUs will >> switch directly to the runtime page-tables so there are no need to >> remove the 1:1 mapping. Note that this still doesn't make the Secondary >> CPUs path safe but it is not making it worst. >> >> Signed-off-by: Julien Grall <julien.grall@arm.com> >> >> --- >> It is very likely we will need to re-introduce the 1:1 mapping to cater >> secondary CPUs boot and suspend/resume. For now, the attempt is to make >> boot CPU path fully Arm Arm compliant. >> >> Changes in v3: >> - Avoid hardcoding slots >> >> Changes in v2: >> - s/ID map/1:1 mapping/ >> - Rename remove_id_map() to remove_identity_mapping() >> - Add missing signed-off-by >> --- >> xen/arch/arm/arm64/head.S | 94 +++++++++++++++++++++++++++++++++++++++-------- >> 1 file changed, 79 insertions(+), 15 deletions(-) >> >> diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S >> index 50cff08756..ec138aae3e 100644 >> --- a/xen/arch/arm/arm64/head.S >> +++ b/xen/arch/arm/arm64/head.S >> @@ -33,6 +33,11 @@ >> #define PT_DEV 0xe71 /* nG=1 AF=1 SH=10 AP=01 NS=1 ATTR=100 T=0 P=1 */ >> #define PT_DEV_L3 0xe73 /* nG=1 AF=1 SH=10 AP=01 NS=1 ATTR=100 T=1 P=1 */ >> >> +/* Convenience defines to get slot used by Xen mapping. */ >> +#define XEN_ZEROETH_SLOT zeroeth_table_offset(XEN_VIRT_START) >> +#define XEN_FIRST_SLOT first_table_offset(XEN_VIRT_START) >> +#define XEN_SECOND_SLOT second_table_offset(XEN_VIRT_START) >> + >> #define __HEAD_FLAG_PAGE_SIZE ((PAGE_SHIFT - 10) / 2) >> >> #define __HEAD_FLAG_PHYS_BASE 1 >> @@ -301,6 +306,13 @@ real_start_efi: >> ldr x0, =primary_switched >> br x0 >> primary_switched: >> + /* >> + * The 1:1 map may clash with other parts of the Xen virtual memory >> + * layout. As it is not used anymore, remove it completely to >> + * avoid having to worry about replacing existing mapping >> + * afterwards. >> + */ >> + bl remove_identity_mapping >> bl setup_fixmap >> #ifdef CONFIG_EARLY_PRINTK >> /* Use a virtual address to access the UART. */ >> @@ -626,10 +638,71 @@ enable_mmu: >> ret >> ENDPROC(enable_mmu) >> >> +/* >> + * Remove the 1:1 map for the page-tables. It is not easy to keep track > ^ from I will fix it in the next version. > >> + * where the 1:1 map was mapped, so we will look for the top-level entry >> + * exclusive to the 1:1 map and remove it. >> + * >> + * Inputs: >> + * x19: paddr(start) >> + * >> + * Clobbers x0 - x1 >> + */ >> +remove_identity_mapping: >> + /* >> + * Find the zeroeth slot used. 
Remove the entry from zeroeth >> + * table if the slot is not XEN_ZEROETH_SLOT. > > This part of the comment is good > > >> + * For slot XEN_ZEROETH_SLOT, the 1:1 mapping was either done in first >> + * or second table. > > I don't think this sentence is very useful now that the slot is not > hard-coded anymore. I would remove it. Instead, I would write something > like "The slot XEN_ZEROETH_SLOT is used for the XEN_VIRT_START mapping." > The same goes for all the other levels. I think this is a bit confusing because one may think the XEN_VIRT_START mapping is using the full slot. Whereas it is only partially using it. So I would prefer to drop the sentence completely. > > >> + */ >> + lsr x1, x19, #ZEROETH_SHIFT /* x1 := zeroeth slot */ >> + cmp x1, #XEN_ZEROETH_SLOT >> + beq 1f >> + /* It is not in slot XEN_ZEROETH_SLOT, remove the entry. */ >> + ldr x0, =boot_pgtable /* x0 := root table */ >> + str xzr, [x0, x1, lsl #3] >> + b identity_mapping_removed >> + >> +1: >> + /* >> + * Find the first slot used. Remove the entry for the first >> + * table if the slot is not XEN_FIRST_SLOT. For slot XEN_FIRST_SLOT, >> + * the 1:1 mapping was done in the second table. >> + */ >> + lsr x1, x19, #FIRST_SHIFT >> + and x1, x1, #LPAE_ENTRY_MASK /* x1 := first slot */ >> + cmp x1, #XEN_FIRST_SLOT >> + beq 1f >> + /* It is not in slot XEN_FIRST_SLOT, remove the entry. */ >> + ldr x0, =boot_first /* x0 := first table */ >> + str xzr, [x0, x1, lsl #3] >> + b identity_mapping_removed >> + >> +1: >> + /* >> + * Find the second slot used. Remove the entry for the first >> + * table if the slot is not XEN_SECOND_SLOT. For slot XEN_SECOND_SLOT, >> + * it means the 1:1 mapping was not created. >> + */ >> + lsr x1, x19, #SECOND_SHIFT >> + and x1, x1, #LPAE_ENTRY_MASK /* x1 := first slot */ >> + cmp x1, #XEN_SECOND_SLOT >> + beq identity_mapping_removed >> + /* It is not in slot 1, remove the entry */ >> + ldr x0, =boot_second /* x0 := second table */ >> + str xzr, [x0, x1, lsl #3] >> + >> +identity_mapping_removed: >> + /* See asm-arm/arm64/flushtlb.h for the explanation of the sequence. */ >> + dsb nshst >> + tlbi alle2 >> + dsb nsh >> + isb > > I just want to point out that asm-arm/arm64/flushtlb.h says to use: > > * DSB ISHST // Ensure prior page-tables updates have completed > * TLBI... // Invalidate the TLB > * DSB ISH // Ensure the TLB invalidation has completed > * ISB // See explanation below > > Also implemented as: > > asm volatile( \ > "dsb ishst;" \ > "tlbi " # tlbop ";" \ > "dsb ish;" \ > "isb;" \ > : : : "memory"); \ > > Why is non-shareable enough? Shouldn't it be inner shareable? I thought I answered this before. I should have probably clarified in the commit message. nsh is used (rather than ish) because the TLB flush is local (see page D5-230 ARM DDI 0487D.a). For convenience here is the text: "In all cases in this section where a DMB or DSB is referred to, it refers to a DMB or DSB whose required access type is both loads and stores. A DSB NSH is sufficient to ensure completion of TLB maintenance instructions that apply to a single PE. A DSB ISH is sufficient to ensure completion of TLB maintenance instructions that apply to PEs in the same Inner Shareable domain." This is something Linux already does but I wasn't able to find the proper justification in the Arm Arm. So I chose a more conservative approach that is explained in section K11.5.3 (ARM DDI 0487D.a). I have an action to update tlbflush.h before this is in a huge pile of cleanup/optimization. Cheers,
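[Editorial note: to make the nshst/nsh versus ishst/ish point concrete, here is a sketch of the two barrier flavours as AArch64 inline assembly: the local-PE sequence used in remove_identity_mapping, and the inner-shareable pattern quoted from flushtlb.h above. The function names are illustrative only and are not the existing Xen helpers.]

    /* Sketch only: names are illustrative, not the Xen tlbflush.h API. */

    /* Local flush: the boot CPU changed its own boot page-tables, so
     * completion only needs to be guaranteed on this PE (DSB NSH). */
    static inline void local_flush_all_el2_tlb(void)
    {
        asm volatile(
            "dsb   nshst\n"  /* page-table writes visible before the TLBI */
            "tlbi  alle2\n"  /* invalidate all EL2 entries on this PE */
            "dsb   nsh\n"    /* wait for the invalidation to complete */
            "isb\n"          /* resynchronise the instruction stream */
            : : : "memory");
    }

    /* Inner-shareable pattern (the shape of the flushtlb.h macro quoted
     * above), needed when other PEs in the Inner Shareable domain must
     * observe the invalidation; ALLE2IS is used here only as an example. */
    static inline void flush_all_el2_tlb_is(void)
    {
        asm volatile(
            "dsb   ishst\n"
            "tlbi  alle2is\n"
            "dsb   ish\n"
            "isb\n"
            : : : "memory");
    }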
On Thu, 22 Aug 2019, Julien Grall wrote: > Hi Stefano, > > On 8/22/19 6:58 PM, Stefano Stabellini wrote: > > On Mon, 12 Aug 2019, Julien Grall wrote: > > > The 1:1 mapping may clash with other parts of the Xen virtual memory > > > layout. At the moment, Xen is handling the clash by only creating a > > > mapping to the runtime virtual address before enabling the MMU. > > > > > > The rest of the mappings (such as the fixmap) will be mapped after the > > > MMU is enabled. However, the code doing the mapping is not safe as it > > > replace mapping without using the Break-Before-Make sequence. > > > > > > As the 1:1 mapping can be anywhere in the memory, it is easier to remove > > > all the entries added as soon as the 1:1 mapping is not used rather than > > > adding the Break-Before-Make sequence everywhere. > > > > > > It is difficult to track where exactly the 1:1 mapping was created > > > without a full rework of create_page_tables(). Instead, introduce a new > > > function remove_identity_mapping() will look where is the top-level entry > > > for the 1:1 mapping and remove it. > > > > > > The new function is only called for the boot CPU. Secondary CPUs will > > > switch directly to the runtime page-tables so there are no need to > > > remove the 1:1 mapping. Note that this still doesn't make the Secondary > > > CPUs path safe but it is not making it worst. > > > > > > Signed-off-by: Julien Grall <julien.grall@arm.com> > > > > > > --- > > > It is very likely we will need to re-introduce the 1:1 mapping to > > > cater > > > secondary CPUs boot and suspend/resume. For now, the attempt is to > > > make > > > boot CPU path fully Arm Arm compliant. > > > > > > Changes in v3: > > > - Avoid hardcoding slots > > > > > > Changes in v2: > > > - s/ID map/1:1 mapping/ > > > - Rename remove_id_map() to remove_identity_mapping() > > > - Add missing signed-off-by > > > --- > > > xen/arch/arm/arm64/head.S | 94 > > > +++++++++++++++++++++++++++++++++++++++-------- > > > 1 file changed, 79 insertions(+), 15 deletions(-) > > > > > > diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S > > > index 50cff08756..ec138aae3e 100644 > > > --- a/xen/arch/arm/arm64/head.S > > > +++ b/xen/arch/arm/arm64/head.S > > > @@ -33,6 +33,11 @@ > > > #define PT_DEV 0xe71 /* nG=1 AF=1 SH=10 AP=01 NS=1 ATTR=100 T=0 P=1 > > > */ > > > #define PT_DEV_L3 0xe73 /* nG=1 AF=1 SH=10 AP=01 NS=1 ATTR=100 T=1 P=1 > > > */ > > > +/* Convenience defines to get slot used by Xen mapping. */ > > > +#define XEN_ZEROETH_SLOT zeroeth_table_offset(XEN_VIRT_START) > > > +#define XEN_FIRST_SLOT first_table_offset(XEN_VIRT_START) > > > +#define XEN_SECOND_SLOT second_table_offset(XEN_VIRT_START) > > > + > > > #define __HEAD_FLAG_PAGE_SIZE ((PAGE_SHIFT - 10) / 2) > > > #define __HEAD_FLAG_PHYS_BASE 1 > > > @@ -301,6 +306,13 @@ real_start_efi: > > > ldr x0, =primary_switched > > > br x0 > > > primary_switched: > > > + /* > > > + * The 1:1 map may clash with other parts of the Xen virtual > > > memory > > > + * layout. As it is not used anymore, remove it completely to > > > + * avoid having to worry about replacing existing mapping > > > + * afterwards. > > > + */ > > > + bl remove_identity_mapping > > > bl setup_fixmap > > > #ifdef CONFIG_EARLY_PRINTK > > > /* Use a virtual address to access the UART. */ > > > @@ -626,10 +638,71 @@ enable_mmu: > > > ret > > > ENDPROC(enable_mmu) > > > +/* > > > + * Remove the 1:1 map for the page-tables. It is not easy to keep track > > ^ from > > I will fix it in the next version. 
> > > > > > + * where the 1:1 map was mapped, so we will look for the top-level entry > > > + * exclusive to the 1:1 map and remove it. > > > + * > > > + * Inputs: > > > + * x19: paddr(start) > > > + * > > > + * Clobbers x0 - x1 > > > + */ > > > +remove_identity_mapping: > > > + /* > > > + * Find the zeroeth slot used. Remove the entry from zeroeth > > > + * table if the slot is not XEN_ZEROETH_SLOT. > > > > This part of the comment is good > > > > > > > + * For slot XEN_ZEROETH_SLOT, the 1:1 mapping was either done in > > > first > > > + * or second table. > > > > I don't think this sentence is very useful now that the slot is not > > hard-coded anymore. I would remove it. Instead, I would write something > > like "The slot XEN_ZEROETH_SLOT is used for the XEN_VIRT_START mapping." > > The same goes for all the other levels. > > I think this is a bit confusing because one may think the XEN_VIRT_START > mapping is using the full slot. Whereas it is only partially using it. > > So I would prefer to drop the sentence completely. That is also OK > > > + */ > > > + lsr x1, x19, #ZEROETH_SHIFT /* x1 := zeroeth slot */ > > > + cmp x1, #XEN_ZEROETH_SLOT > > > + beq 1f > > > + /* It is not in slot XEN_ZEROETH_SLOT, remove the entry. */ > > > + ldr x0, =boot_pgtable /* x0 := root table */ > > > + str xzr, [x0, x1, lsl #3] > > > + b identity_mapping_removed > > > + > > > +1: > > > + /* > > > + * Find the first slot used. Remove the entry for the first > > > + * table if the slot is not XEN_FIRST_SLOT. For slot > > > XEN_FIRST_SLOT, > > > + * the 1:1 mapping was done in the second table. > > > + */ > > > + lsr x1, x19, #FIRST_SHIFT > > > + and x1, x1, #LPAE_ENTRY_MASK /* x1 := first slot */ > > > + cmp x1, #XEN_FIRST_SLOT > > > + beq 1f > > > + /* It is not in slot XEN_FIRST_SLOT, remove the entry. */ > > > + ldr x0, =boot_first /* x0 := first table */ > > > + str xzr, [x0, x1, lsl #3] > > > + b identity_mapping_removed > > > + > > > +1: > > > + /* > > > + * Find the second slot used. Remove the entry for the first > > > + * table if the slot is not XEN_SECOND_SLOT. For slot > > > XEN_SECOND_SLOT, > > > + * it means the 1:1 mapping was not created. > > > + */ > > > + lsr x1, x19, #SECOND_SHIFT > > > + and x1, x1, #LPAE_ENTRY_MASK /* x1 := first slot */ > > > + cmp x1, #XEN_SECOND_SLOT > > > + beq identity_mapping_removed > > > + /* It is not in slot 1, remove the entry */ > > > + ldr x0, =boot_second /* x0 := second table */ > > > + str xzr, [x0, x1, lsl #3] > > > + > > > +identity_mapping_removed: > > > + /* See asm-arm/arm64/flushtlb.h for the explanation of the > > > sequence. */ > > > + dsb nshst > > > + tlbi alle2 > > > + dsb nsh > > > + isb > > > > I just want to point out that asm-arm/arm64/flushtlb.h says to use: > > > > * DSB ISHST // Ensure prior page-tables updates have completed > > * TLBI... // Invalidate the TLB > > * DSB ISH // Ensure the TLB invalidation has completed > > * ISB // See explanation below > > > > Also implemented as: > > > > asm volatile( \ > > "dsb ishst;" \ > > "tlbi " # tlbop ";" \ > > "dsb ish;" \ > > "isb;" \ > > : : : "memory"); \ > > > > Why is non-shareable enough? Shouldn't it be inner shareable? > > I thought I answered this before. I should have probably clarified in the > commit message. I had the feeling you already answered but couldn't find the reference. Yes please add something to the commit message or an in-code comment, especially given that tlbflush.h is not updated yet. 
> nsh is used (rather than ish) because the TLB flush is local (see page D5-230 > ARM DDI 0487D.a). For convenience here is the text: > > "In all cases in this section where a DMB or DSB is referred to, it > refers to a DMB or DSB whose required access type is > both loads and stores. A DSB NSH is sufficient to ensure completion of > TLB maintenance instructions that apply to a > single PE. A DSB ISH is sufficient to ensure completion of TLB > maintenance instructions that apply to PEs in the > same Inner Shareable domain." > > This is something Linux already does but I wasn't able to find the proper > justification in the Arm Arm. So I chose a more conservative approach that is > explained in section K11.5.3 (ARM DDI 0487D.a). > > I have an action to update tlbflush.h before this is in a huge pile of > cleanup/optimization.
diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
index 50cff08756..ec138aae3e 100644
--- a/xen/arch/arm/arm64/head.S
+++ b/xen/arch/arm/arm64/head.S
@@ -33,6 +33,11 @@
 #define PT_DEV    0xe71 /* nG=1 AF=1 SH=10 AP=01 NS=1 ATTR=100 T=0 P=1 */
 #define PT_DEV_L3 0xe73 /* nG=1 AF=1 SH=10 AP=01 NS=1 ATTR=100 T=1 P=1 */

+/* Convenience defines to get slot used by Xen mapping. */
+#define XEN_ZEROETH_SLOT    zeroeth_table_offset(XEN_VIRT_START)
+#define XEN_FIRST_SLOT      first_table_offset(XEN_VIRT_START)
+#define XEN_SECOND_SLOT     second_table_offset(XEN_VIRT_START)
+
 #define __HEAD_FLAG_PAGE_SIZE ((PAGE_SHIFT - 10) / 2)

 #define __HEAD_FLAG_PHYS_BASE 1
@@ -301,6 +306,13 @@ real_start_efi:
         ldr   x0, =primary_switched
         br    x0
 primary_switched:
+        /*
+         * The 1:1 map may clash with other parts of the Xen virtual memory
+         * layout. As it is not used anymore, remove it completely to
+         * avoid having to worry about replacing existing mapping
+         * afterwards.
+         */
+        bl    remove_identity_mapping
         bl    setup_fixmap
 #ifdef CONFIG_EARLY_PRINTK
         /* Use a virtual address to access the UART. */
@@ -626,10 +638,71 @@ enable_mmu:
         ret
 ENDPROC(enable_mmu)

+/*
+ * Remove the 1:1 map for the page-tables. It is not easy to keep track
+ * where the 1:1 map was mapped, so we will look for the top-level entry
+ * exclusive to the 1:1 map and remove it.
+ *
+ * Inputs:
+ *   x19: paddr(start)
+ *
+ * Clobbers x0 - x1
+ */
+remove_identity_mapping:
+        /*
+         * Find the zeroeth slot used. Remove the entry from zeroeth
+         * table if the slot is not XEN_ZEROETH_SLOT. For slot
+         * XEN_ZEROETH_SLOT, the 1:1 mapping was either done in first
+         * or second table.
+         */
+        lsr   x1, x19, #ZEROETH_SHIFT   /* x1 := zeroeth slot */
+        cmp   x1, #XEN_ZEROETH_SLOT
+        beq   1f
+        /* It is not in slot XEN_ZEROETH_SLOT, remove the entry. */
+        ldr   x0, =boot_pgtable         /* x0 := root table */
+        str   xzr, [x0, x1, lsl #3]
+        b     identity_mapping_removed
+
+1:
+        /*
+         * Find the first slot used. Remove the entry for the first
+         * table if the slot is not XEN_FIRST_SLOT. For slot XEN_FIRST_SLOT,
+         * the 1:1 mapping was done in the second table.
+         */
+        lsr   x1, x19, #FIRST_SHIFT
+        and   x1, x1, #LPAE_ENTRY_MASK  /* x1 := first slot */
+        cmp   x1, #XEN_FIRST_SLOT
+        beq   1f
+        /* It is not in slot XEN_FIRST_SLOT, remove the entry. */
+        ldr   x0, =boot_first           /* x0 := first table */
+        str   xzr, [x0, x1, lsl #3]
+        b     identity_mapping_removed
+
+1:
+        /*
+         * Find the second slot used. Remove the entry for the first
+         * table if the slot is not XEN_SECOND_SLOT. For slot XEN_SECOND_SLOT,
+         * it means the 1:1 mapping was not created.
+         */
+        lsr   x1, x19, #SECOND_SHIFT
+        and   x1, x1, #LPAE_ENTRY_MASK  /* x1 := first slot */
+        cmp   x1, #XEN_SECOND_SLOT
+        beq   identity_mapping_removed
+        /* It is not in slot 1, remove the entry */
+        ldr   x0, =boot_second          /* x0 := second table */
+        str   xzr, [x0, x1, lsl #3]
+
+identity_mapping_removed:
+        /* See asm-arm/arm64/flushtlb.h for the explanation of the sequence. */
+        dsb   nshst
+        tlbi  alle2
+        dsb   nsh
+        isb
+
+        ret
+ENDPROC(remove_identity_mapping)
+
 setup_fixmap:
-        /* Now we can install the fixmap and dtb mappings, since we
-         * don't need the 1:1 map any more */
-        dsb   sy
 #if defined(CONFIG_EARLY_PRINTK) /* Fixmap is only used by early printk */
         /* Add UART to the fixmap table */
         ldr   x1, =xen_fixmap            /* x1 := vaddr (xen_fixmap) */
@@ -647,19 +720,10 @@ setup_fixmap:
         ldr   x1, =FIXMAP_ADDR(0)
         lsr   x1, x1, #(SECOND_SHIFT - 3)   /* x1 := Slot for FIXMAP(0) */
         str   x2, [x4, x1]               /* Map it in the fixmap's slot */
-#endif

-        /*
-         * Flush the TLB in case the 1:1 mapping happens to clash with
-         * the virtual addresses used by the fixmap or DTB.
-         */
-        dsb   sy        /* Ensure any page table updates made above
-                         * have occurred. */
-
-        isb
-        tlbi  alle2
-        dsb   sy        /* Ensure completion of TLB flush */
-        isb
+        /* Ensure any page table updates made above have occurred. */
+        dsb   nshst
+#endif
         ret
 ENDPROC(setup_fixmap)
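[Editorial note: as a cross-check of the assembly above, a rough C rendering of the same walk follows, assuming the 4KB-granule LPAE shift/mask constants and a 2MB XEN_VIRT_START. It is only a sketch of the logic; the real code has to stay in head.S because it runs on the boot page-tables before the normal C environment exists.]

    #include <stdint.h>

    typedef uint64_t lpae_t;              /* simplified: one 64-bit entry */

    #define LPAE_ENTRIES     512
    #define LPAE_ENTRY_MASK  (LPAE_ENTRIES - 1)
    #define ZEROETH_SHIFT    39
    #define FIRST_SHIFT      30
    #define SECOND_SHIFT     21
    #define XEN_VIRT_START   0x00200000UL /* assumed 2MB runtime base */

    #define SLOT(addr, shift) (((addr) >> (shift)) & LPAE_ENTRY_MASK)

    /* Stand-ins for the boot page-tables that head.S populates. */
    static lpae_t boot_pgtable[LPAE_ENTRIES];
    static lpae_t boot_first[LPAE_ENTRIES];
    static lpae_t boot_second[LPAE_ENTRIES];

    /*
     * Remove the 1:1 mapping of paddr_start: at most one entry was added
     * for it, in the highest level whose slot differs from the one used
     * by XEN_VIRT_START, so clearing that single entry is enough.
     */
    static void remove_identity_mapping(uint64_t paddr_start)
    {
        if ( SLOT(paddr_start, ZEROETH_SHIFT) != SLOT(XEN_VIRT_START, ZEROETH_SHIFT) )
            boot_pgtable[SLOT(paddr_start, ZEROETH_SHIFT)] = 0;
        else if ( SLOT(paddr_start, FIRST_SHIFT) != SLOT(XEN_VIRT_START, FIRST_SHIFT) )
            boot_first[SLOT(paddr_start, FIRST_SHIFT)] = 0;
        else if ( SLOT(paddr_start, SECOND_SHIFT) != SLOT(XEN_VIRT_START, SECOND_SHIFT) )
            boot_second[SLOT(paddr_start, SECOND_SHIFT)] = 0;
        /* else: all slots match Xen's own mapping, no 1:1 entry was added. */

        /* The local TLB flush (DSB NSHST; TLBI ALLE2; DSB NSH; ISB) follows. */
    }

The walk relies on the property that create_page_tables() adds at most one extra entry for the 1:1 mapping, at the highest level where the load address and XEN_VIRT_START use different slots, so clearing that single entry (followed by the local TLB flush) removes the whole 1:1 mapping.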
The 1:1 mapping may clash with other parts of the Xen virtual memory
layout. At the moment, Xen handles the clash by only creating a
mapping to the runtime virtual address before enabling the MMU.

The rest of the mappings (such as the fixmap) will be mapped after the
MMU is enabled. However, the code doing the mapping is not safe as it
replaces mappings without using the Break-Before-Make sequence.

As the 1:1 mapping can be anywhere in memory, it is easier to remove
all the entries added for it as soon as the 1:1 mapping is no longer
used rather than adding the Break-Before-Make sequence everywhere.

It is difficult to track where exactly the 1:1 mapping was created
without a full rework of create_page_tables(). Instead, introduce a new
function remove_identity_mapping() which will look for the top-level
entry used by the 1:1 mapping and remove it.

The new function is only called for the boot CPU. Secondary CPUs will
switch directly to the runtime page-tables, so there is no need to
remove the 1:1 mapping for them. Note that this still doesn't make the
secondary CPU path safe, but it does not make it worse either.

Signed-off-by: Julien Grall <julien.grall@arm.com>

---
It is very likely we will need to re-introduce the 1:1 mapping to cater
for secondary CPU boot and suspend/resume. For now, the aim is to make
the boot CPU path fully Arm Arm compliant.

Changes in v3:
    - Avoid hardcoding slots

Changes in v2:
    - s/ID map/1:1 mapping/
    - Rename remove_id_map() to remove_identity_mapping()
    - Add missing Signed-off-by
---
 xen/arch/arm/arm64/head.S | 94 +++++++++++++++++++++++++++++++++++++++--------
 1 file changed, 79 insertions(+), 15 deletions(-)
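[Editorial note: the Break-Before-Make sequence referred to above is what the old setup_fixmap path skipped when it replaced a live mapping. Below is a minimal sketch of the required ordering when changing a valid entry, using a hypothetical write_pte_bbm() helper and the local flush from the patch; none of these names are Xen APIs.]

    #include <stdint.h>

    typedef uint64_t lpae_t;

    /* Local EL2 TLB maintenance, as used in remove_identity_mapping. */
    static inline void local_flush_all_el2_tlb(void)
    {
        asm volatile("dsb nshst; tlbi alle2; dsb nsh; isb" ::: "memory");
    }

    /*
     * Break-Before-Make: when a valid entry is replaced by another valid
     * entry with a different output address (or certain attributes), the
     * old translation must be invalidated and flushed before the new one
     * is written, otherwise the TLB may briefly hold two conflicting
     * translations for the same virtual address.
     */
    static void write_pte_bbm(volatile lpae_t *entry, lpae_t new_pte)
    {
        *entry = 0;                    /* 1. Break: install an invalid entry */
        local_flush_all_el2_tlb();     /* 2. Flush the stale translation     */
        *entry = new_pte;              /* 3. Make: install the new mapping   */
        asm volatile("dsb nshst; isb" ::: "memory"); /* make it observable */
    }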