diff mbox series

[Xen-devel,14/17] xen/arm64: head: Remove ID map as soon as it is not used

Message ID 20190610193215.23704-15-julien.grall@arm.com
State Superseded
Headers show
Series xen/arm64: Rework head.S to make it more compliant with the Arm Arm | expand

Commit Message

Julien Grall June 10, 2019, 7:32 p.m. UTC
The ID map may clash with other parts of the Xen virtual memory layout.
At the moment, Xen is handling the clash by only creating a mapping to
the runtime virtual address before enabling the MMU.

The rest of the mappings (such as the fixmap) will be mapped after the
MMU is enabled. However, the code doing the mapping is not safe, as it
replaces mappings without using the Break-Before-Make sequence.

As the ID map can be anywhere in memory, it is easier to remove all
the entries added for it as soon as the ID map is no longer used, rather
than adding the Break-Before-Make sequence everywhere.
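
For reference, the Break-Before-Make sequence mandated by the Arm Arm
when replacing a valid entry looks roughly as follows (an illustrative
fragment only; the register usage is assumed, not taken from this patch):

    str   xzr, [x0, x1, lsl #3]  /* Break: clear the live entry */
    dsb   nshst                  /* Ensure the write is visible */
    tlbi  alle2                  /* Invalidate stale TLB entries */
    dsb   nsh                    /* Wait for the invalidation to complete */
    isb                          /* Synchronize the instruction stream */
    str   x2, [x0, x1, lsl #3]   /* Make: install the new entry */
    dsb   nshst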

It is difficult to track exactly where the ID map was created without a
full rework of create_page_tables(). Instead, introduce a new function
remove_id_map() that will look for the top-level entry used by the ID
map and remove it.

The new function is only called for the boot CPU. Secondary CPUs will
switch directly to the runtime page-tables, so there is no need to
remove the ID mapping for them. Note that this still doesn't make the
secondary CPU path safe, but it does not make it any worse.

---
    Note that the comment refers to the patch "xen/arm: tlbflush: Rework
    TLB helpers" under review (see [1]).

    Furthermore, it is very likely we will need to re-introduce the ID
    map to cater for secondary CPU boot and suspend/resume. For now, the
    aim is to make the boot CPU path fully Arm Arm compliant.

[1] https://lists.xenproject.org/archives/html/xen-devel/2019-05/msg01134.html
---
 xen/arch/arm/arm64/head.S | 86 ++++++++++++++++++++++++++++++++++++++---------
 1 file changed, 71 insertions(+), 15 deletions(-)

Comments

Stefano Stabellini June 26, 2019, 8:25 p.m. UTC | #1
On Mon, 10 Jun 2019, Julien Grall wrote:
> The ID map may clash with other parts of the Xen virtual memory layout.
> At the moment, Xen is handling the clash by only creating a mapping to
> the runtime virtual address before enabling the MMU.
> 
> The rest of the mappings (such as the fixmap) will be mapped after the
> MMU is enabled. However, the code doing the mapping is not safe, as it
> replaces mappings without using the Break-Before-Make sequence.
> 
> As the ID map can be anywhere in memory, it is easier to remove all
> the entries added for it as soon as the ID map is no longer used, rather
> than adding the Break-Before-Make sequence everywhere.

I think it is a good idea, but I would ask you to mention the 1:1 map
instead of "ID map" in comments and commit messages, because that is the
wording we have used in all comments so far (see the ones in
create_page_tables and mm.c). It is easier to grep and refer to if we
use the same nomenclature. Note that I don't care which nomenclature we
decide to use, I am only asking for consistency. Otherwise, it would
also work if you say it both ways at least once:

 The ID map (1:1 map) may clash [...]


> It is difficult to track exactly where the ID map was created without a
> full rework of create_page_tables(). Instead, introduce a new function
> remove_id_map() that will look for the top-level entry used by the ID
> map and remove it.

Do you think it would be worth simplifying the code below by preserving
where/how the ID map was created? We could repurpose x25 for that,
carrying for instance the address of the ID map section slot or a code
number to specify which case we are dealing with. Shouldn't we be able
to turn remove_id_map into only ~5 lines?
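
Something like the following, hypothetical and uncompiled, assuming x25
were repurposed to hold the address of the ID map slot (or 0 when no ID
map was created):

     cbz   x25, 1f                   /* no ID map slot was recorded */
     str   xzr, [x25]                /* clear the recorded slot */
     dsb   nshst
     tlbi  alle2
     dsb   nsh
     isb
1:
     ret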


> The new function is only called for the boot CPU. Secondary CPUs will
> switch directly to the runtime page-tables, so there is no need to
> remove the ID mapping for them. Note that this still doesn't make the
> secondary CPU path safe, but it does not make it any worse.
> 
> ---
>     Note that the comment refers to the patch "xen/arm: tlbflush: Rework
>     TLB helpers" under review (see [1]).
> 
>     Furthermore, it is very likely we will need to re-introduce the ID
>     map to cater for secondary CPU boot and suspend/resume. For now, the
>     aim is to make the boot CPU path fully Arm Arm compliant.
> 
> [1] https://lists.xenproject.org/archives/html/xen-devel/2019-05/msg01134.html
> ---
>  xen/arch/arm/arm64/head.S | 86 ++++++++++++++++++++++++++++++++++++++---------
>  1 file changed, 71 insertions(+), 15 deletions(-)
> 
> diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
> index 192af3e8a2..96e85f8834 100644
> --- a/xen/arch/arm/arm64/head.S
> +++ b/xen/arch/arm/arm64/head.S
> @@ -300,6 +300,13 @@ real_start_efi:
>          ldr   x0, =primary_switched
>          br    x0
>  primary_switched:
> +        /*
> +         * The ID map may clash with other parts of the Xen virtual memory
> +         * layout. As it is not used anymore, remove it completely to
> +         * avoid having to worry about replacing existing mappings
> +         * afterwards.
> +         */
> +        bl    remove_id_map
>          bl    setup_fixmap
>  #ifdef CONFIG_EARLY_PRINTK
>          /* Use a virtual address to access the UART. */
> @@ -632,10 +639,68 @@ enable_mmu:
>          ret
>  ENDPROC(enable_mmu)
>  
> +/*
> + * Remove the ID map from the page-tables. It is not easy to keep track
> + * of where the ID map was mapped, so we will look for the top-level entry
> + * exclusive to the ID map and remove it.
> + *
> + * Inputs:
> + *   x19: paddr(start)
> + *
> + * Clobbers x0 - x1
> + */
> +remove_id_map:
> +        /*
> +         * Find the zeroeth slot used. Remove the entry from the zeroeth
> +         * table if the slot is not 0. For slot 0, the ID map was either
> +         * done in the first or second table.
> +         */
> +        lsr   x1, x19, #ZEROETH_SHIFT   /* x1 := zeroeth slot */
> +        cbz   x1, 1f
> +        /* It is not in slot 0, remove the entry */
> +        ldr   x0, =boot_pgtable         /* x0 := root table */
> +        str   xzr, [x0, x1, lsl #3]
> +        b     id_map_removed
> +
> +1:
> +        /*
> +         * Find the first slot used. Remove the entry for the first
> +         * table if the slot is not 0. For slot 0, the ID map was done
> +         * in the second table.
> +         */
> +        lsr   x1, x19, #FIRST_SHIFT
> +        and   x1, x1, #LPAE_ENTRY_MASK  /* x1 := first slot */
> +        cbz   x1, 1f
> +        /* It is not in slot 0, remove the entry */
> +        ldr   x0, =boot_first           /* x0 := first table */
> +        str   xzr, [x0, x1, lsl #3]
> +        b     id_map_removed
> +
> +1:
> +        /*
> +         * Find the second slot used. Remove the entry from the second
> +         * table if the slot is not 1 (runtime Xen mapping is 2M - 4M).
> +         * For slot 1, it means the ID map was not created.
> +         */
> +        lsr   x1, x19, #SECOND_SHIFT
> +        and   x1, x1, #LPAE_ENTRY_MASK  /* x1 := second slot */
> +        cmp   x1, #1
> +        beq   id_map_removed
> +        /* It is not in slot 1, remove the entry */
> +        ldr   x0, =boot_second          /* x0 := second table */
> +        str   xzr, [x0, x1, lsl #3]
> +
> +id_map_removed:
> +        /* See asm-arm/arm64/flushtlb.h for the explanation of the sequence. */
> +        dsb   nshst
> +        tlbi  alle2
> +        dsb   nsh
> +        isb
> +
> +        ret
> +ENDPROC(remove_id_map)
> +
>  setup_fixmap:
> -        /* Now we can install the fixmap and dtb mappings, since we
> -         * don't need the 1:1 map any more */
> -        dsb   sy
>  #if defined(CONFIG_EARLY_PRINTK) /* Fixmap is only used by early printk */
>          /* Add UART to the fixmap table */
>          ldr   x1, =xen_fixmap        /* x1 := vaddr (xen_fixmap) */
> @@ -653,19 +718,10 @@ setup_fixmap:
>          ldr   x1, =FIXMAP_ADDR(0)
>          lsr   x1, x1, #(SECOND_SHIFT - 3)   /* x1 := Slot for FIXMAP(0) */
>          str   x2, [x4, x1]           /* Map it in the fixmap's slot */
> -#endif
>  
> -        /*
> -         * Flush the TLB in case the 1:1 mapping happens to clash with
> -         * the virtual addresses used by the fixmap or DTB.
> -         */
> -        dsb   sy                     /* Ensure any page table updates made above
> -                                      * have occurred. */
> -
> -        isb
> -        tlbi  alle2
> -        dsb   sy                     /* Ensure completion of TLB flush */
> -        isb
> +        /* Ensure any page table updates made above have occurred */
> +        dsb   nshst
> +#endif
>          ret
>  ENDPROC(setup_fixmap)
>  
> -- 
> 2.11.0
>
Julien Grall June 26, 2019, 8:39 p.m. UTC | #2
Hi Stefano,

On 6/26/19 9:25 PM, Stefano Stabellini wrote:
> On Mon, 10 Jun 2019, Julien Grall wrote:
>> The ID map may clash with other parts of the Xen virtual memory layout.
>> At the moment, Xen is handling the clash by only creating a mapping to
>> the runtime virtual address before enabling the MMU.
>>
>> The rest of the mappings (such as the fixmap) will be mapped after the
>> MMU is enabled. However, the code doing the mapping is not safe, as it
>> replaces mappings without using the Break-Before-Make sequence.
>>
>> As the ID map can be anywhere in memory, it is easier to remove all
>> the entries added for it as soon as the ID map is no longer used, rather
>> than adding the Break-Before-Make sequence everywhere.
> 
> I think it is a good idea, but I would ask you to mention the 1:1 map
> instead of "ID map" in comments and commit messages, because that is the
> wording we have used in all comments so far (see the ones in
> create_page_tables and mm.c). It is easier to grep and refer to if we
> use the same nomenclature. Note that I don't care which nomenclature we
> decide to use, I am only asking for consistency. Otherwise, it would
> also work if you say it both ways at least once:
> 
>   The ID map (1:1 map) may clash [...]

I would rather drop the "1:1" wording, as it is confusing. It is also not
trivial to find anything on Google when typing "1:1".

> 
> 
>> It is difficult to track exactly where the ID map was created without a
>> full rework of create_page_tables(). Instead, introduce a new function
>> remove_id_map() that will look for the top-level entry used by the ID
>> map and remove it.
> 
> Do you think it would be worth simplifying the code below by preserving
> where/how the ID map was created? We could repurpose x25 for that,
> carrying for instance the address of the ID map section slot or a code
> number to specify which case we are dealing with. Shouldn't we be able
> to turn remove_id_map into only ~5 lines?

I thought about it, but the current implementation of create_page_tables()
is quite awful to read. So the less I touch this function, the better I
feel :).

I have some rework for create_page_tables() which simplifies it a lot.
Yet, I am not entirely sure it is worth spending time trying to simplify
remove_id_map. This is unlikely to make the boot significantly faster,
and I don't expect the function to survive more than a release, as the ID
map has to be kept in place (for secondary boot and suspend/resume).

The only reason it is removed now is because it clashes with other
mappings we may create.

Cheers,
Andrew Cooper June 26, 2019, 8:44 p.m. UTC | #3
On 26/06/2019 21:39, Julien Grall wrote:
> On 6/26/19 9:25 PM, Stefano Stabellini wrote:
>> On Mon, 10 Jun 2019, Julien Grall wrote:
>>> The ID map may clash with other parts of the Xen virtual memory layout.
>>> At the moment, Xen is handling the clash by only creating a mapping to
>>> the runtime virtual address before enabling the MMU.
>>>
>>> The rest of the mappings (such as the fixmap) will be mapped after the
>>> MMU is enabled. However, the code doing the mapping is not safe, as it
>>> replaces mappings without using the Break-Before-Make sequence.
>>>
>>> As the ID map can be anywhere in memory, it is easier to remove all
>>> the entries added for it as soon as the ID map is no longer used, rather
>>> than adding the Break-Before-Make sequence everywhere.
>>
>> I think it is a good idea, but I would ask you to mention the 1:1 map
>> instead of "ID map" in comments and commit messages, because that is the
>> wording we have used in all comments so far (see the ones in
>> create_page_tables and mm.c). It is easier to grep and refer to if we
>> use the same nomenclature. Note that I don't care which nomenclature we
>> decide to use, I am only asking for consistency. Otherwise, it would
>> also work if you say it both ways at least once:
>>
>>   The ID map (1:1 map) may clash [...]
>
> I would rather drop the "1:1" wording, as it is confusing. It is also
> not trivial to find anything on Google when typing "1:1".

"one-to-one mapping", or "identity map" are both common terminology. 
1:1 is a common representation for the former, whereas ID is not a
abbreviation of "Identity".

If you don't want to use 1:1, then you need to say "The identity map" to
retain clarity.

~Andrew
Stefano Stabellini June 27, 2019, 6:55 p.m. UTC | #4
On Mon, 10 Jun 2019, Julien Grall wrote:
> The ID map may clash with other parts of the Xen virtual memory layout.
> At the moment, Xen is handling the clash by only creating a mapping to
> the runtime virtual address before enabling the MMU.
> 
> The rest of the mappings (such as the fixmap) will be mapped after the
> MMU is enabled. However, the code doing the mapping is not safe, as it
> replaces mappings without using the Break-Before-Make sequence.
> 
> As the ID map can be anywhere in memory, it is easier to remove all
> the entries added for it as soon as the ID map is no longer used, rather
> than adding the Break-Before-Make sequence everywhere.
> 
> It is difficult to track exactly where the ID map was created without a
> full rework of create_page_tables(). Instead, introduce a new function
> remove_id_map() that will look for the top-level entry used by the ID
> map and remove it.
> 
> The new function is only called for the boot CPU. Secondary CPUs will
> switch directly to the runtime page-tables, so there is no need to
> remove the ID mapping for them. Note that this still doesn't make the
> secondary CPU path safe, but it does not make it any worse.
> 
> ---
>     Note that the comment refers to the patch "xen/arm: tlbflush: Rework
>     TLB helpers" under review (see [1]).
> 
>     Furthermore, it is very likely we will need to re-introduce the ID
>     map to cater for secondary CPU boot and suspend/resume. For now, the
>     aim is to make the boot CPU path fully Arm Arm compliant.
> 
> [1] https://lists.xenproject.org/archives/html/xen-devel/2019-05/msg01134.html
> ---
>  xen/arch/arm/arm64/head.S | 86 ++++++++++++++++++++++++++++++++++++++---------
>  1 file changed, 71 insertions(+), 15 deletions(-)
> 
> diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
> index 192af3e8a2..96e85f8834 100644
> --- a/xen/arch/arm/arm64/head.S
> +++ b/xen/arch/arm/arm64/head.S
> @@ -300,6 +300,13 @@ real_start_efi:
>          ldr   x0, =primary_switched
>          br    x0
>  primary_switched:
> +        /*
> +         * The ID map may clash with other parts of the Xen virtual memory
> +         * layout. As it is not used anymore, remove it completely to
> +         * avoid having to worry about replacing existing mappings
> +         * afterwards.
> +         */
> +        bl    remove_id_map
>          bl    setup_fixmap
>  #ifdef CONFIG_EARLY_PRINTK
>          /* Use a virtual address to access the UART. */
> @@ -632,10 +639,68 @@ enable_mmu:
>          ret
>  ENDPROC(enable_mmu)
>  
> +/*
> + * Remove the ID map for the page-tables. It is not easy to keep track
> + * where the ID map was mapped, so we will look for the top-level entry
> + * exclusive to the ID Map and remove it.
> + *
> + * Inputs:
> + *   x19: paddr(start)
> + *
> + * Clobbers x0 - x1
> + */
> +remove_id_map:
> +        /*
> +         * Find the zeroeth slot used. Remove the entry from the zeroeth
> +         * table if the slot is not 0. For slot 0, the ID map was either
> +         * done in the first or second table.
> +         */
> +        lsr   x1, x19, #ZEROETH_SHIFT   /* x1 := zeroeth slot */
> +        cbz   x1, 1f
> +        /* It is not in slot 0, remove the entry */
> +        ldr   x0, =boot_pgtable         /* x0 := root table */
> +        str   xzr, [x0, x1, lsl #3]
> +        b     id_map_removed
> +
> +1:
> +        /*
> +         * Find the first slot used. Remove the entry for the first
> +         * table if the slot is not 0. For slot 0, the ID map was done
> +         * in the second table.
> +         */
> +        lsr   x1, x19, #FIRST_SHIFT
> +        and   x1, x1, #LPAE_ENTRY_MASK  /* x1 := first slot */
> +        cbz   x1, 1f
> +        /* It is not in slot 0, remove the entry */
> +        ldr   x0, =boot_first           /* x0 := first table */
> +        str   xzr, [x0, x1, lsl #3]
> +        b     id_map_removed
> +
> +1:
> +        /*
> +         * Find the second slot used. Remove the entry from the second
> +         * table if the slot is not 1 (runtime Xen mapping is 2M - 4M).
> +         * For slot 1, it means the ID map was not created.
> +         */
> +        lsr   x1, x19, #SECOND_SHIFT
> +        and   x1, x1, #LPAE_ENTRY_MASK  /* x1 := second slot */
> +        cmp   x1, #1
> +        beq   id_map_removed
> +        /* It is not in slot 1, remove the entry */
> +        ldr   x0, =boot_second          /* x0 := second table */
> +        str   xzr, [x0, x1, lsl #3]

Wouldn't it be a bit more reliable if we checked whether the slot in
question for x19 (whether zeroeth, first, or second) is a pagetable
pointer or a section map, then zeroed it if it is a section map, and
otherwise went down one level? If we did it this way, it would be
independent of the way create_page_tables is written.

With the current code, we are somewhat reliant on the behavior of
create_page_tables, because we rely on the position of the slot for
the ID map, where the assumption, for instance, is that at level one, if
the slot is zero, then we need to go down a level, etc. Instead, if we
checked whether the slot is a section map, we could remove it
immediately; if it is a pagetable pointer, we would proceed. The code
should be similar in complexity and LOC, but it would be more robust.

Something like the following, in pseudo-uncompiled assembly:

     lsr   x1, x19, #FIRST_SHIFT
     ldr   x0, =boot_first           /* x0 := first table */
     ldr   x2, [x0, x1, lsl #3]
     # check x2 against #PT_MEM
     cbz   x2, 1f
     str   xzr, [x0, x1, lsl #3]
     b     id_map_removed
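
(For completeness: on LPAE, bit 1 of a valid descriptor is what
distinguishes a table pointer from a block/section mapping at levels 0-2,
so the "check x2 against #PT_MEM" above could concretely be something
like the following, label names assumed:

     ldr   x2, [x0, x1, lsl #3]      /* x2 := descriptor for the slot */
     tbz   x2, #1, 2f                /* bit 1 clear: section map, remove at 2: */

falling through to descend one level when it is a table pointer.)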


> +id_map_removed:
> +        /* See asm-arm/arm64/flushtlb.h for the explanation of the sequence. */

Do you mean xen/include/asm-arm/arm64/flushtlb.h? I can't find the
explanation you are referring to.


> +        dsb   nshst
> +        tlbi  alle2
> +        dsb   nsh
> +        isb
> +
> +        ret
> +ENDPROC(remove_id_map)
> +
>  setup_fixmap:
> -        /* Now we can install the fixmap and dtb mappings, since we
> -         * don't need the 1:1 map any more */
> -        dsb   sy
>  #if defined(CONFIG_EARLY_PRINTK) /* Fixmap is only used by early printk */
>          /* Add UART to the fixmap table */
>          ldr   x1, =xen_fixmap        /* x1 := vaddr (xen_fixmap) */
> @@ -653,19 +718,10 @@ setup_fixmap:
>          ldr   x1, =FIXMAP_ADDR(0)
>          lsr   x1, x1, #(SECOND_SHIFT - 3)   /* x1 := Slot for FIXMAP(0) */
>          str   x2, [x4, x1]           /* Map it in the fixmap's slot */
> -#endif
>  
> -        /*
> -         * Flush the TLB in case the 1:1 mapping happens to clash with
> -         * the virtual addresses used by the fixmap or DTB.
> -         */
> -        dsb   sy                     /* Ensure any page table updates made above
> -                                      * have occurred. */
> -
> -        isb
> -        tlbi  alle2
> -        dsb   sy                     /* Ensure completion of TLB flush */
> -        isb
> +        /* Ensure any page table updates made above have occurred */
> +        dsb   nshst
> +#endif
>          ret
>  ENDPROC(setup_fixmap)
>  
> -- 
> 2.11.0
>
Julien Grall June 27, 2019, 7:30 p.m. UTC | #5
Hi Stefano,

On 6/27/19 7:55 PM, Stefano Stabellini wrote:
> On Mon, 10 Jun 2019, Julien Grall wrote:
>> +1:
>> +        /*
>> +         * Find the second slot used. Remove the entry from the second
>> +         * table if the slot is not 1 (runtime Xen mapping is 2M - 4M).
>> +         * For slot 1, it means the ID map was not created.
>> +         */
>> +        lsr   x1, x19, #SECOND_SHIFT
>> +        and   x1, x1, #LPAE_ENTRY_MASK  /* x1 := second slot */
>> +        cmp   x1, #1
>> +        beq   id_map_removed
>> +        /* It is not in slot 1, remove the entry */
>> +        ldr   x0, =boot_second          /* x0 := second table */
>> +        str   xzr, [x0, x1, lsl #3]
> 
> Wouldn't it be a bit more reliable if we checked whether the slot in
> question for x19 (whether zeroeth, first, or second) is a pagetable
> pointer or a section map, then zeroed it if it is a section map, and
> otherwise went down one level? If we did it this way, it would be
> independent of the way create_page_tables is written.

Your suggestion would not comply with the architectural requirements,
nor with how Xen is/will be working after the full rework. We want to
remove everything (mapping + table) added specifically for the 1:1
mapping.

Otherwise, you may end up in a position where boot_first_id is still in
place. We would need to use the break-before-make sequence in subsequent
code if we were about to insert a 1GB mapping at the same place.

After my rework, there would be virtually no place where
break-before-make is necessary, as it will enforce that all the mappings
are destroyed beforehand. So I would rather avoid making a special case
for the 1:1 mapping.

As a side note, the current code for the 1:1 mapping is completely
wrong, as using a 1GB (or even 2MB) mapping may end up mapping an MMIO
region (or a reserved region). This may result in cache problems. I have
this partially fixed for the next version of the series (see [1]).

> 
> With the current code, we are somewhat reliant on the behavior of
> create_page_tables, because we rely on the position of the slot for
> the ID map, where the assumption, for instance, is that at level one, if
> the slot is zero, then we need to go down a level, etc. Instead, if we
> checked whether the slot is a section map, we could remove it
> immediately; if it is a pagetable pointer, we would proceed. The code
> should be similar in complexity and LOC, but it would be more robust.

See above :).

> 
> Something like the following, in pseudo-uncompiled assembly:
> 
>       lsr   x1, x19, #FIRST_SHIFT
>       ldr   x0, =boot_first           /* x0 := first table */
>       ldr   x2, [x0, x1, lsl #3]
>       # check x2 against #PT_MEM
>       cbz   x2, 1f
>       str   xzr, [x0, x1, lsl #3]
>       b     id_map_removed
> 
> 
>> +id_map_removed:
>> +        /* See asm-arm/arm64/flushtlb.h for the explanation of the sequence. */
> 
> Do you mean xen/include/asm-arm/arm64/flushtlb.h? I can't find the
> explanation you are referring to.

The big comment at the top of the header:

/*
  * Every invalidation operation use the following patterns:
  *
  * DSB ISHST        // Ensure prior page-tables updates have completed
  * TLBI...          // Invalidate the TLB
  * DSB ISH          // Ensure the TLB invalidation has completed
  * ISB              // See explanation below
  *
  * For Xen page-tables the ISB will discard any instructions fetched
  * from the old mappings.
  *
  * For the Stage-2 page-tables the ISB ensures the completion of the DSB
  * (and therefore the TLB invalidation) before continuing. So we know
  * the TLBs cannot contain an entry for a mapping we may have removed.
  */

Note that we are using nsh (and not ish) because we are doing a local
TLB flush (see page D5-230 of ARM DDI 0487D.a). For convenience, here is
the text:

"In all cases in this section where a DMB or DSB is referred to, it 
refers to a DMB or DSB whose required access type is
both loads and stores. A DSB NSH is sufficient to ensure completion of 
TLB maintenance instructions that apply to a
single PE. A DSB ISH is sufficient to ensure completion of TLB 
maintenance instructions that apply to PEs in the
same Inner Shareable domain."

I discovered this section after the changes in flushtlb.h had been
merged. But I am thinking of doing a follow-up on the local TLB flush
code.
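
Concretely, the local-PE variant used in this patch is:

     dsb   nshst   /* ensure prior page-table updates have completed */
     tlbi  alle2   /* invalidate all EL2 TLB entries on this PE */
     dsb   nsh     /* ensure the TLB invalidation has completed */
     isb

whereas a flush that has to be observed by every PE in the Inner
Shareable domain would use dsb ishst / tlbi alle2is / dsb ish / isb
instead.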

> 
> 
>> +        dsb   nshst
>> +        tlbi  alle2
>> +        dsb   nsh
>> +        isb
>> +
>> +        ret
>> +ENDPROC(remove_id_map)

[...]

[1] Rework for create_page_tables

diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
index a79ae54822..c019dd3e04 100644
--- a/xen/arch/arm/arm64/head.S
+++ b/xen/arch/arm/arm64/head.S
@@ -483,6 +483,60 @@ cpu_init:
  ENDPROC(cpu_init)

  /*
+ * Macro to create a page table entry in \ptbl to \tbl
+ *
+ * ptbl:    table symbol where the entry will be created
+ * tbl:     table symbol to point to
+ * virt:    virtual address
+ * shift:   #imm page table shift
+ * tmp1:    scratch register
+ * tmp2:    scratch register
+ * tmp3:    scratch register
+ *
+ * Preserves \virt
+ * Clobbers \tmp1, \tmp2, \tmp3
+ *
+ * Also use x20 for the phys offset.
+ *
+ * Note that all parameters using registers should be distinct.
+ */
+.macro create_table_entry, ptbl, tbl, virt, shift, tmp1, tmp2, tmp3
+        lsr   \tmp1, \virt, #\shift
+        and   \tmp1, \tmp1, #LPAE_ENTRY_MASK/* \tmp1 := slot in \tbl */
+        load_paddr \tmp2, \tbl
+        mov   \tmp3, #PT_PT                 /* \tmp3 := right for linear PT */
+        orr   \tmp3, \tmp3, \tmp2           /*          + \tlb paddr */
+        adr_l \tmp2, \ptbl
+        str   \tmp3, [\tmp2, \tmp1, lsl #3]
+.endm
+
+/*
+ * Macro to create a mapping entry in \tbl to \paddr. Only mapping in 3rd
+ * level table is supported.
+ *
+ * tbl:     table symbol where the entry will be created
+ * virt:    virtual address
+ * paddr:   physical address (should be page aligned)
+ * tmp1:    scratch register
+ * tmp2:    scratch register
+ * tmp3:    scratch register
+ * type:    mapping type. If not specified it will be normal memory (PT_MEM_L3)
+ *
+ * Preserves \virt, \paddr
+ * Clobbers \tmp1, \tmp2, \tmp3
+ *
+ * Note that all parameters using registers should be distinct.
+ */
+.macro create_mapping_entry, tbl, virt, paddr, tmp1, tmp2, tmp3, type=PT_MEM_L3
+        lsr   \tmp1, \virt, #THIRD_SHIFT
+        and   \tmp1, \tmp1, #LPAE_ENTRY_MASK/* \tmp1 := slot in \tbl */
+        mov   \tmp2, #\type                 /* \tmp2 := right for section PT */
+        orr   \tmp2, \tmp2, \paddr          /*          + paddr */
+        adr_l \tmp3, \tbl
+        str   \tmp2, [\tmp3, \tmp1, lsl #3]
+.endm
+
+/*
   * Rebuild the boot pagetable's first-level entries. The structure
   * is described in mm.c.
   *
@@ -495,100 +549,17 @@ ENDPROC(cpu_init)
   *   x19: paddr(start)
   *   x20: phys offset
   *
- * Clobbers x0 - x4, x25
- *
- * Register usage within this function:
- *   x25: Identity map in place
+ * Clobbers x0 - x4
   */
  create_page_tables:
-        /*
-         * If Xen is loaded at exactly XEN_VIRT_START then we don't
-         * need an additional 1:1 mapping, the virtual mapping will
-         * suffice.
-         */
-        cmp   x19, #XEN_VIRT_START
-        cset  x25, eq                /* x25 := identity map in place, or not */
-
-        load_paddr x4, boot_pgtable
-
-        /* Setup boot_pgtable: */
-        load_paddr x1, boot_first
-
-        /* ... map boot_first in boot_pgtable[0] */
-        mov   x3, #PT_PT             /* x2 := table map of boot_first */
-        orr   x2, x1, x3             /*       + rights for linear PT */
-        str   x2, [x4, #0]           /* Map it in slot 0 */
-
-        /* ... map of paddr(start) in boot_pgtable+boot_first_id */
-        lsr   x1, x19, #ZEROETH_SHIFT/* Offset of base paddr in boot_pgtable */
-        cbz   x1, 1f                 /* It's in slot 0, map in boot_first
-                                      * or boot_second later on */
-
-        /*
-         * Level zero does not support superpage mappings, so we have
-         * to use an extra first level page in which we create a 1GB mapping.
-         */
-        load_paddr x2, boot_first_id
-
-        mov   x3, #PT_PT             /* x2 := table map of boot_first_id */
-        orr   x2, x2, x3             /*       + rights for linear PT */
-        str   x2, [x4, x1, lsl #3]
-
-        load_paddr x4, boot_first_id
-
-        lsr   x1, x19, #FIRST_SHIFT  /* x1 := Offset of base paddr in boot_first_id */
-        lsl   x2, x1, #FIRST_SHIFT   /* x2 := Base address for 1GB mapping */
-        mov   x3, #PT_MEM            /* x2 := Section map */
-        orr   x2, x2, x3
-        and   x1, x1, #LPAE_ENTRY_MASK /* x1 := Slot offset */
-        str   x2, [x4, x1, lsl #3]   /* Mapping of paddr(start) */
-        mov   x25, #1                /* x25 := identity map now in place */
-
-1:      /* Setup boot_first: */
-        load_paddr x4, boot_first   /* Next level into boot_first */
-
-        /* ... map boot_second in boot_first[0] */
-        load_paddr x1, boot_second
-        mov   x3, #PT_PT             /* x2 := table map of boot_second */
-        orr   x2, x1, x3             /*       + rights for linear PT */
-        str   x2, [x4, #0]           /* Map it in slot 0 */
-
-        /* ... map of paddr(start) in boot_first */
-        cbnz  x25, 1f                /* x25 is set if already created */
-        lsr   x2, x19, #FIRST_SHIFT  /* x2 := Offset of base paddr in boot_first */
-        and   x1, x2, #LPAE_ENTRY_MASK /* x1 := Slot to use */
-        cbz   x1, 1f                 /* It's in slot 0, map in boot_second */
-
-        lsl   x2, x2, #FIRST_SHIFT   /* Base address for 1GB mapping */
-        mov   x3, #PT_MEM            /* x2 := Section map */
-        orr   x2, x2, x3
-        str   x2, [x4, x1, lsl #3]   /* Create mapping of paddr(start)*/
-        mov   x25, #1                /* x25 := identity map now in place */
-
-1:      /* Setup boot_second: */
-        load_paddr x4, boot_second
-
-        /* ... map boot_third in boot_second[1] */
-        load_paddr x1, boot_third
-        mov   x3, #PT_PT             /* x2 := table map of boot_third */
-        orr   x2, x1, x3             /*       + rights for linear PT */
-        str   x2, [x4, #8]           /* Map it in slot 1 */
-
-        /* ... map of paddr(start) in boot_second */
-        cbnz  x25, 1f                /* x25 is set if already created */
-        lsr   x2, x19, #SECOND_SHIFT /* x2 := Offset of base paddr in boot_second */
-        and   x1, x2, #LPAE_ENTRY_MASK /* x1 := Slot to use */
-        cmp   x1, #1
-        b.eq  virtphys_clash         /* It's in slot 1, which we cannot handle */
+        /* Prepare the page-tables for mapping Xen */
+        ldr   x0, =XEN_VIRT_START
+        create_table_entry boot_pgtable, boot_first, x0, ZEROETH_SHIFT, x1, x2, x3
+        create_table_entry boot_first, boot_second, x0, FIRST_SHIFT, x1, x2, x3
+        create_table_entry boot_second, boot_third, x0, SECOND_SHIFT, x1, x2, x3

-        lsl   x2, x2, #SECOND_SHIFT  /* Base address for 2MB mapping */
-        mov   x3, #PT_MEM            /* x2 := Section map */
-        orr   x2, x2, x3
-        str   x2, [x4, x1, lsl #3]   /* Create mapping of paddr(start)*/
-        mov   x25, #1                /* x25 := identity map now in place */
-
-1:      /* Setup boot_third: */
-        load_paddr x4, boot_third
+        /* Map Xen */
+        adr_l x4, boot_third

          lsr   x2, x19, #THIRD_SHIFT  /* Base address for 4K mapping */
          lsl   x2, x2, #THIRD_SHIFT
@@ -603,21 +574,68 @@ create_page_tables:
          cmp   x1, #(LPAE_ENTRIES<<3) /* 512 entries per page */
          b.lt  1b

-        /* Defer fixmap and dtb mapping until after paging enabled, to
-         * avoid them clashing with the 1:1 mapping. */
+        /*
+         * If Xen is loaded at exactly XEN_VIRT_START then we don't
+         * need an additional 1:1 mapping, the virtual mapping will
+         * suffice.
+         */
+        cmp   x19, #XEN_VIRT_START
+        bne   1f
+        ret
+1:
+        /*
+         * Only the first page of Xen will be part of the 1:1 mapping.
+         * All the boot_*_id tables are linked together even if they may
+         * not be all used. They will then be linked to the boot page
+         * tables at the correct level.
+         */
+        create_table_entry boot_first_id, boot_second_id, x19, FIRST_SHIFT, x0, x1, x2
+        create_table_entry boot_second_id, boot_third_id, x19, SECOND_SHIFT, x0, x1, x2
+        create_mapping_entry boot_third_id, x19, x19, x0, x1, x2
+
+        /*
+         * Find the zeroeth slot used. Link boot_first_id into
+         * boot_pgtable if the slot is not 0. For slot 0, the tables
+         * associated with the 1:1 mapping will need to be linked in
+         * boot_first or boot_second.
+         */
+        lsr   x0, x19, #ZEROETH_SHIFT   /* x0 := zeroeth slot */
+        cbz   x0, 1f
+        /* It is not in slot 0, Link boot_first_id into boot_pgtable */
+        create_table_entry boot_pgtable, boot_first_id, x19, ZEROETH_SHIFT, x0, x1, x2
+        ret
+
+1:
+        /*
+         * Find the first slot used. Link boot_second_id into boot_first
+         * if the slot is not 0. For slot 0, the tables associated with
+         * the 1:1 mapping will need to be linked in boot_second.
+         */
+        lsr   x0, x19, #FIRST_SHIFT
+        and   x0, x0, #LPAE_ENTRY_MASK  /* x0 := first slot */
+        cbz   x0, 1f
+        /* It is not in slot 0, Link boot_second_id into boot_first */
+        create_table_entry boot_first, boot_second_id, x19, FIRST_SHIFT, x0, x1, x2
+        ret

-        /* boot pagetable setup complete */
+1:
+        /*
+         * Find the second slot used. Link boot_third_id into boot_second
+         * if the slot is not 1 (runtime Xen mapping is 2M - 4M).
+         * For slot 1, Xen is not yet able to handle it.
+         */
+        lsr   x0, x19, #SECOND_SHIFT
+        and   x0, x0, #LPAE_ENTRY_MASK  /* x0 := second slot */
+        cmp   x0, #1
+        beq   virtphys_clash
+        /* It is not in slot 1, link boot_third_id into boot_second */
+        create_table_entry boot_second, boot_third_id, x19, SECOND_SHIFT, x0, x1, x2
+        ret

-        cbnz  x25, 1f                /* Did we manage to create an identity mapping ? */
-        PRINT("Unable to build boot page tables - Failed to identity map Xen.\r\n")
-        b     fail
  virtphys_clash:
          /* Identity map clashes with boot_third, which we cannot handle yet */
          PRINT("- Unable to build boot page tables - virt and phys addresses clash. -\r\n")
          b     fail
-
-1:
-        ret
  ENDPROC(create_page_tables)

  /*
@@ -719,28 +737,15 @@ ENDPROC(remove_identity_mapping)
   * The fixmap cannot be mapped in create_page_tables because it may
   * clash with the 1:1 mapping.
   *
- * Clobbers x1 - x4
+ * Clobbers x0 - x3
   */
  setup_fixmap:
  #ifdef CONFIG_EARLY_PRINTK
-        /* Add UART to the fixmap table */
-        ldr   x1, =xen_fixmap        /* x1 := vaddr (xen_fixmap) */
-        lsr   x2, x23, #THIRD_SHIFT
-        lsl   x2, x2, #THIRD_SHIFT   /* 4K aligned paddr of UART */
-        mov   x3, #PT_DEV_L3
-        orr   x2, x2, x3             /* x2 := 4K dev map including UART */
-        str   x2, [x1, #(FIXMAP_CONSOLE*8)] /* Map it in the first fixmap's slot */
+        ldr   x0, =EARLY_UART_VIRTUAL_ADDRESS
+        create_mapping_entry xen_fixmap, x0, x23, x1, x2, x3, type=PT_DEV_L3
  #endif
-
-        /* Map fixmap into boot_second */
-        ldr   x4, =boot_second       /* x4 := vaddr (boot_second) */
-        load_paddr x2, xen_fixmap
-        mov   x3, #PT_PT
-        orr   x2, x2, x3             /* x2 := table map of xen_fixmap */
-        ldr   x1, =FIXMAP_ADDR(0)
-        lsr   x1, x1, #(SECOND_SHIFT - 3)   /* x1 := Slot for FIXMAP(0) */
-        str   x2, [x4, x1]           /* Map it in the fixmap's slot */
-
+        ldr   x0, =FIXMAP_ADDR(0)
+        create_table_entry boot_second, xen_fixmap, x0, SECOND_SHIFT, x1, x2, x3
          /* Ensure any page table updates made above have occurred */
          dsb   nshst
          ret
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index c2f1795a71..bc1824d3ca 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -107,6 +107,8 @@ DEFINE_BOOT_PAGE_TABLE(boot_pgtable);
  DEFINE_BOOT_PAGE_TABLE(boot_first);
  DEFINE_BOOT_PAGE_TABLE(boot_first_id);
  #endif
+DEFINE_BOOT_PAGE_TABLE(boot_second_id);
+DEFINE_BOOT_PAGE_TABLE(boot_third_id);
  DEFINE_BOOT_PAGE_TABLE(boot_second);
  DEFINE_BOOT_PAGE_TABLE(boot_third);
Stefano Stabellini June 28, 2019, 12:36 a.m. UTC | #6
On Wed, 26 Jun 2019, Julien Grall wrote:
> Hi Stefano,
> 
> On 6/26/19 9:25 PM, Stefano Stabellini wrote:
> > On Mon, 10 Jun 2019, Julien Grall wrote:
> > > The ID map may clash with other parts of the Xen virtual memory layout.
> > > At the moment, Xen is handling the clash by only creating a mapping to
> > > the runtime virtual address before enabling the MMU.
> > > 
> > > The rest of the mappings (such as the fixmap) will be mapped after the
> > > MMU is enabled. However, the code doing the mapping is not safe, as it
> > > replaces mappings without using the Break-Before-Make sequence.
> > > 
> > > As the ID map can be anywhere in memory, it is easier to remove all
> > > the entries added for it as soon as the ID map is no longer used, rather
> > > than adding the Break-Before-Make sequence everywhere.
> > 
> > I think it is a good idea, but I would ask you to mention the 1:1 map
> > instead of "ID map" in comments and commit messages, because that is the
> > wording we have used in all comments so far (see the ones in
> > create_page_tables and mm.c). It is easier to grep and refer to if we
> > use the same nomenclature. Note that I don't care which nomenclature we
> > decide to use, I am only asking for consistency. Otherwise, it would
> > also work if you say it both ways at least once:
> > 
> >   The ID map (1:1 map) may clash [...]
> 
> I would rather drop the "1:1" wording, as it is confusing. It is also not
> trivial to find anything on Google when typing "1:1".

That's fine too
Julien Grall July 10, 2019, 7:39 p.m. UTC | #7
Hi,

@Stefano, I am going through the series and noticed you didn't give any 
update. Could you confirm if my reply makes sense?

Cheers,

On 6/27/19 8:30 PM, Julien Grall wrote:
> [...]
Stefano Stabellini July 30, 2019, 5:33 p.m. UTC | #8
On Thu, 27 Jun 2019, Julien Grall wrote:
> On 6/27/19 7:55 PM, Stefano Stabellini wrote:
> > On Mon, 10 Jun 2019, Julien Grall wrote:
> > > +1:
> > > +        /*
> > > +         * Find the second slot used. Remove the entry from the second
> > > +         * table if the slot is not 1 (runtime Xen mapping is 2M - 4M).
> > > +         * For slot 1, it means the ID map was not created.
> > > +         */
> > > +        lsr   x1, x19, #SECOND_SHIFT
> > > +        and   x1, x1, #LPAE_ENTRY_MASK  /* x1 := second slot */
> > > +        cmp   x1, #1
> > > +        beq   id_map_removed
> > > +        /* It is not in slot 1, remove the entry */
> > > +        ldr   x0, =boot_second          /* x0 := second table */
> > > +        str   xzr, [x0, x1, lsl #3]
> > 
> > Wouldn't it be a bit more reliable if we checked whether the slot in
> > question for x19 (whether zero, first, second) is a pagetable pointer or
> > section map, then zero it if it is a section map, otherwise go down one
> > level? If we did it this way it would be independent from the way
> > create_page_tables is written.
> 
> Your suggestion will not comply with the architecture compliance and how Xen
> is/will be working after the full rework. We want to remove everything
> (mapping + table) added specifically for the 1:1 mapping.
> 
> Otherwise, you may end up in a position where boot_first_id is still in place.
> We would need to use the break-before-make sequence in subsequent code if we
> were about to insert 1GB mapping at the same place.
> 
> After my rework, we would have virtually no place where break-before-make will
> be necessary as it will enforce all the mappings to be destroyed beforehand.
> So I would rather avoid making a specific case for the 1:1 mapping.

I don't fully understand your explanation. I understand the final goal
of "removing everything (mapping + table) added specifically for the 1:1
mapping". I don't understand why my suggestion would be a hindrance
toward that goal, compared to what it is done in this patch.
Julien Grall July 30, 2019, 7:52 p.m. UTC | #9
Hi Stefano,

On 7/30/19 6:33 PM, Stefano Stabellini wrote:
> On Thu, 27 Jun 2019, Julien Grall wrote:
>> On 6/27/19 7:55 PM, Stefano Stabellini wrote:
>>> On Mon, 10 Jun 2019, Julien Grall wrote:
>>>> +1:
>>>> +        /*
>>>> +         * Find the second slot used. Remove the entry from the second
>>>> +         * table if the slot is not 1 (runtime Xen mapping is 2M - 4M).
>>>> +         * For slot 1, it means the ID map was not created.
>>>> +         */
>>>> +        lsr   x1, x19, #SECOND_SHIFT
>>>> +        and   x1, x1, #LPAE_ENTRY_MASK  /* x1 := second slot */
>>>> +        cmp   x1, #1
>>>> +        beq   id_map_removed
>>>> +        /* It is not in slot 1, remove the entry */
>>>> +        ldr   x0, =boot_second          /* x0 := second table */
>>>> +        str   xzr, [x0, x1, lsl #3]
>>>
>>> Wouldn't it be a bit more reliable if we checked whether the slot in
>>> question for x19 (whether zero, first, second) is a pagetable pointer or
>>> section map, then zero it if it is a section map, otherwise go down one
>>> level? If we did it this way it would be independent from the way
>>> create_page_tables is written.
>>
>> Your suggestion will not comply with the architecture compliance and how Xen
>> is/will be working after the full rework. We want to remove everything
>> (mapping + table) added specifically for the 1:1 mapping.
>>
>> Otherwise, you may end up in a position where boot_first_id is still in place.
>> We would need to use the break-before-make sequence in subsequent code if we
>> were about to insert 1GB mapping at the same place.
>>
>> After my rework, we would have virtually no place where break-before-make will
>> be necessary as it will enforce all the mappings to be destroyed beforehand.
>> So I would rather avoid making a specific case for the 1:1 mapping.
> 
> I don't fully understand your explanation. I understand the final goal
> of "removing everything (mapping + table) added specifically for the 1:1
> mapping". I don't understand why my suggestion would be a hindrance
> toward that goal, compared to what it is done in this patch.

Because, AFAICT, your suggestion will only remove the mapping and not 
the tables (such as boot_first_id). This is different from this patch 
where both mapping and tables are removed.

So yes, my suggestion is not generic, but at least it does the job that 
is expected by this function, i.e. removing anything that was 
specifically created for the identity mapping.
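
To make the Break-Before-Make point concrete, here is a minimal sketch (not
code from this series; it assumes GNU C on AArch64) of what replacing a live
EL2 entry would require, mirroring the dsb/tlbi/dsb/isb sequence that
remove_id_map already uses:

#include <stdint.h>

typedef uint64_t lpae_t;

/* Sketch: replace a live page-table entry with Break-Before-Make. */
static void replace_entry_bbm(volatile lpae_t *entry, lpae_t new_entry)
{
    *entry = 0;                              /* break: clear the old entry */
    asm volatile("dsb nshst" ::: "memory");  /* publish the zeroed entry */
    asm volatile("tlbi alle2");              /* flush stale EL2 translations */
    asm volatile("dsb nsh" ::: "memory");    /* wait for the TLBI to finish */
    asm volatile("isb");                     /* synchronise the pipeline */
    *entry = new_entry;                      /* make: install the new entry */
}

Removing the ID-map tables up front means none of the subsequent boot
mappings need this dance.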

Cheers,
Stefano Stabellini July 31, 2019, 8:40 p.m. UTC | #10
On Tue, 30 Jul 2019, Julien Grall wrote:
> Hi Stefano,
> 
> On 7/30/19 6:33 PM, Stefano Stabellini wrote:
> > On Thu, 27 Jun 2019, Julien Grall wrote:
> > > On 6/27/19 7:55 PM, Stefano Stabellini wrote:
> > > > On Mon, 10 Jun 2019, Julien Grall wrote:
> > > > > +1:
> > > > > +        /*
> > > > > +         * Find the second slot used. Remove the entry from the second
> > > > > +         * table if the slot is not 1 (runtime Xen mapping is 2M - 4M).
> > > > > +         * For slot 1, it means the ID map was not created.
> > > > > +         */
> > > > > +        lsr   x1, x19, #SECOND_SHIFT
> > > > > +        and   x1, x1, #LPAE_ENTRY_MASK  /* x1 := second slot */
> > > > > +        cmp   x1, #1
> > > > > +        beq   id_map_removed
> > > > > +        /* It is not in slot 1, remove the entry */
> > > > > +        ldr   x0, =boot_second          /* x0 := second table */
> > > > > +        str   xzr, [x0, x1, lsl #3]
> > > > 
> > > > Wouldn't it be a bit more reliable if we checked whether the slot in
> > > > question for x19 (whether zero, first, second) is a pagetable pointer or
> > > > section map, then zero it if it is a section map, otherwise go down one
> > > > level? If we did it this way it would be independent from the way
> > > > create_page_tables is written.
> > > 
> > > Your suggestion will not comply with the architecture compliance and
> > > how Xen is/will be working after the full rework. We want to remove
> > > everything (mapping + table) added specifically for the 1:1 mapping.
> > > 
> > > Otherwise, you may end up in a position where boot_first_id is still
> > > in place. We would need to use the break-before-make sequence in
> > > subsequent code if we were about to insert 1GB mapping at the same
> > > place.
> > > 
> > > After my rework, we would have virtually no place where
> > > break-before-make will be necessary as it will enforce all the
> > > mappings to be destroyed beforehand. So I would rather avoid making a
> > > specific case for the 1:1 mapping.
> > 
> > I don't fully understand your explanation. I understand the final goal
> > of "removing everything (mapping + table) added specifically for the 1:1
> > mapping". I don't understand why my suggestion would be a hindrance
> > toward that goal, compared to what it is done in this patch.
> 
> Because, AFAICT, your suggestion will only remove the mapping and not the
> tables (such as boot_first_id). This is different from this patch where both
> mapping and tables are removed.
>
> So yes, my suggestion is not generic, but at least it does the job that is
> expected by this function, i.e. removing anything that was specifically created
> for the identity mapping.

I understand your comment now, and of course I agree that both mapping
and tables need to be removed.

I am careful about making suggestions for assembly coding because I
don't want to suggest something that doesn't work, or that works but is
worse than the original.

It should be possible to remove both the table and the mapping in a
generic way. Instead of hardcoding the assembly equivalent of "It is not
in slot 0, remove the entry", we could check whether the table offset
matches the table offset of the mapping that we want to preserve. That
way, "slot 0" would be calculate instead of hardcoded, and the code
would be pretty generic. What do you think? It should only be a small
addition.
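
For illustration, a hedged C sketch of that generic walk (the helper itself
is hypothetical; the table names and shifts are the ones head.S already
uses):

#include <stdint.h>

typedef uint64_t lpae_t;

#define LPAE_ENTRY_MASK  0x1ffUL
#define ZEROETH_SHIFT    39
#define FIRST_SHIFT      30
#define SECOND_SHIFT     21

extern lpae_t boot_pgtable[], boot_first[], boot_second[];

/* Walk down the levels; clear the first slot that is exclusive to the
 * ID map, i.e. not shared with the runtime mapping of 'xen_vaddr'. */
static void remove_id_map_generic(uint64_t id_vaddr, uint64_t xen_vaddr)
{
    lpae_t *const tables[] = { boot_pgtable, boot_first, boot_second };
    const unsigned int shifts[] = { ZEROETH_SHIFT, FIRST_SHIFT, SECOND_SHIFT };
    unsigned int i;

    for ( i = 0; i < 3; i++ )
    {
        uint64_t id_slot  = (id_vaddr  >> shifts[i]) & LPAE_ENTRY_MASK;
        uint64_t xen_slot = (xen_vaddr >> shifts[i]) & LPAE_ENTRY_MASK;

        if ( id_slot != xen_slot )
        {
            /* Slot is exclusive to the ID map: drop mapping and table. */
            tables[i][id_slot] = 0;
            return;
        }
        /* Slots collide: the ID map, if any, lives one level down. */
    }
    /* All levels collide: virt == phys, so no ID map was created. */
}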
Julien Grall July 31, 2019, 9:07 p.m. UTC | #11
Hi Stefano,

On 7/31/19 9:40 PM, Stefano Stabellini wrote:
> On Tue, 30 Jul 2019, Julien Grall wrote:
>> Hi Stefano,
>>
>> On 7/30/19 6:33 PM, Stefano Stabellini wrote:
>>> On Thu, 27 Jun 2019, Julien Grall wrote:
>>>> On 6/27/19 7:55 PM, Stefano Stabellini wrote:
>>>>> On Mon, 10 Jun 2019, Julien Grall wrote:
>>>>>> +1:
>>>>>> +        /*
>>>>>> +         * Find the second slot used. Remove the entry from the second
>>>>>> +         * table if the slot is not 1 (runtime Xen mapping is 2M - 4M).
>>>>>> +         * For slot 1, it means the ID map was not created.
>>>>>> +         */
>>>>>> +        lsr   x1, x19, #SECOND_SHIFT
>>>>>> +        and   x1, x1, #LPAE_ENTRY_MASK  /* x1 := second slot */
>>>>>> +        cmp   x1, #1
>>>>>> +        beq   id_map_removed
>>>>>> +        /* It is not in slot 1, remove the entry */
>>>>>> +        ldr   x0, =boot_second          /* x0 := second table */
>>>>>> +        str   xzr, [x0, x1, lsl #3]
>>>>>
>>>>> Wouldn't it be a bit more reliable if we checked whether the slot in
>>>>> question for x19 (whether zero, first, second) is a pagetable pointer or
>>>>> section map, then zero it if it is a section map, otherwise go down one
>>>>> level? If we did it this way it would be independent from the way
>>>>> create_page_tables is written.
>>>>
>>>> Your suggestion will not comply with the architecture compliance and
>>>> how Xen is/will be working after the full rework. We want to remove
>>>> everything (mapping + table) added specifically for the 1:1 mapping.
>>>>
>>>> Otherwise, you may end up in a position where boot_first_id is still
>>>> in place. We would need to use the break-before-make sequence in
>>>> subsequent code if we were about to insert 1GB mapping at the same
>>>> place.
>>>>
>>>> After my rework, we would have virtually no place where
>>>> break-before-make will be necessary as it will enforce all the
>>>> mappings to be destroyed beforehand. So I would rather avoid making a
>>>> specific case for the 1:1 mapping.
>>>
>>> I don't fully understand your explanation. I understand the final goal
>>> of "removing everything (mapping + table) added specifically for the 1:1
>>> mapping". I don't understand why my suggestion would be a hindrance
>>> toward that goal, compared to what it is done in this patch.
>>
>> Because, AFAICT, your suggestion will only remove the mapping and not the
>> tables (such as boot_first_id). This is different from this patch where both
>> mapping and tables are removed.
>>
>> So yes, my suggestion is not generic, but at least it does the job that is
>> expected by this function, i.e. removing anything that was specifically created
>> for the identity mapping.
> 
> I understand your comment now, and of course I agree that both mapping
> and tables need to be removed.
> 
> I am careful about making suggestions for assembly coding because I
> don't want to suggest something that doesn't work, or that works but is
> worse than the original.
> 
> It should be possible to remove both the table and the mapping in a
> generic way. Instead of hardcoding the assembly equivalent of "It is not
> in slot 0, remove the entry", we could check whether the table offset
> matches the table offset of the mapping that we want to preserve. That
> way, "slot 0" would be calculate instead of hardcoded, and the code
> would be pretty generic. What do you think? It should only be a small
> addition.

It should be feasible and may actually help the next step in my plan 
where I need to make Xen relocatable.

I will have a look for both the arm32 and arm64 code.

Cheers,
diff mbox series

Patch

diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
index 192af3e8a2..96e85f8834 100644
--- a/xen/arch/arm/arm64/head.S
+++ b/xen/arch/arm/arm64/head.S
@@ -300,6 +300,13 @@  real_start_efi:
         ldr   x0, =primary_switched
         br    x0
 primary_switched:
+        /*
+         * The ID map may clash with other parts of the Xen virtual memory
+         * layout. As it is not used anymore, remove it completely to
+         * avoid having to worry about replacing existing mappings
+         * afterwards.
+         */
+        bl    remove_id_map
         bl    setup_fixmap
 #ifdef CONFIG_EARLY_PRINTK
         /* Use a virtual address to access the UART. */
@@ -632,10 +639,68 @@  enable_mmu:
         ret
 ENDPROC(enable_mmu)
 
+/*
+ * Remove the ID map from the page-tables. It is not easy to keep track
+ * of where the ID map was mapped, so we will look for the top-level
+ * entry exclusive to the ID map and remove it.
+ *
+ * Inputs:
+ *   x19: paddr(start)
+ *
+ * Clobbers x0 - x1
+ */
+remove_id_map:
+        /*
+         * Find the zeroeth slot used. Remove the entry from the zeroeth
+         * table if the slot is not 0. For slot 0, the ID map was done
+         * in either the first or the second table.
+         */
+        lsr   x1, x19, #ZEROETH_SHIFT   /* x1 := zeroeth slot */
+        cbz   x1, 1f
+        /* It is not in slot 0, remove the entry */
+        ldr   x0, =boot_pgtable         /* x0 := root table */
+        str   xzr, [x0, x1, lsl #3]
+        b     id_map_removed
+
+1:
+        /*
+         * Find the first slot used. Remove the entry from the first
+         * table if the slot is not 0. For slot 0, the ID map was done
+         * in the second table.
+         */
+        lsr   x1, x19, #FIRST_SHIFT
+        and   x1, x1, #LPAE_ENTRY_MASK  /* x1 := first slot */
+        cbz   x1, 1f
+        /* It is not in slot 0, remove the entry */
+        ldr   x0, =boot_first           /* x0 := first table */
+        str   xzr, [x0, x1, lsl #3]
+        b     id_map_removed
+
+1:
+        /*
+         * Find the second slot used. Remove the entry from the second
+         * table if the slot is not 1 (runtime Xen mapping is 2M - 4M).
+         * For slot 1, it means the ID map was not created.
+         */
+        lsr   x1, x19, #SECOND_SHIFT
+        and   x1, x1, #LPAE_ENTRY_MASK  /* x1 := second slot */
+        cmp   x1, #1
+        beq   id_map_removed
+        /* It is not in slot 1, remove the entry */
+        ldr   x0, =boot_second          /* x0 := second table */
+        str   xzr, [x0, x1, lsl #3]
+
+id_map_removed:
+        /* See asm-arm/arm64/flushtlb.h for the explanation of the sequence. */
+        dsb   nshst
+        tlbi  alle2
+        dsb   nsh
+        isb
+
+        ret
+ENDPROC(remove_id_map)
+
 setup_fixmap:
-        /* Now we can install the fixmap and dtb mappings, since we
-         * don't need the 1:1 map any more */
-        dsb   sy
 #if defined(CONFIG_EARLY_PRINTK) /* Fixmap is only used by early printk */
         /* Add UART to the fixmap table */
         ldr   x1, =xen_fixmap        /* x1 := vaddr (xen_fixmap) */
@@ -653,19 +718,10 @@  setup_fixmap:
         ldr   x1, =FIXMAP_ADDR(0)
         lsr   x1, x1, #(SECOND_SHIFT - 3)   /* x1 := Slot for FIXMAP(0) */
         str   x2, [x4, x1]           /* Map it in the fixmap's slot */
-#endif
 
-        /*
-         * Flush the TLB in case the 1:1 mapping happens to clash with
-         * the virtual addresses used by the fixmap or DTB.
-         */
-        dsb   sy                     /* Ensure any page table updates made above
-                                      * have occurred. */
-
-        isb
-        tlbi  alle2
-        dsb   sy                     /* Ensure completion of TLB flush */
-        isb
+        /* Ensure any page table updates made above have occurred */
+        dsb   nshst
+#endif
         ret
 ENDPROC(setup_fixmap)
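
For completeness, a rough C model of the remove_id_map walk above, to make
the three cases explicit (the real code runs in assembly before the C
environment is set up; this sketch is only illustrative):

#include <stdint.h>

typedef uint64_t lpae_t;

#define LPAE_ENTRY_MASK  0x1ffUL
#define ZEROETH_SHIFT    39
#define FIRST_SHIFT      30
#define SECOND_SHIFT     21

extern lpae_t boot_pgtable[], boot_first[], boot_second[];

static void remove_id_map_model(uint64_t start)   /* x19 := paddr(start) */
{
    uint64_t slot;

    /* Zeroeth level: the runtime mapping lives in slot 0. */
    slot = start >> ZEROETH_SHIFT;
    if ( slot != 0 )
    {
        boot_pgtable[slot] = 0;
        return;   /* followed by dsb nshst; tlbi alle2; dsb nsh; isb */
    }

    /* First level: the runtime mapping again lives in slot 0. */
    slot = (start >> FIRST_SHIFT) & LPAE_ENTRY_MASK;
    if ( slot != 0 )
    {
        boot_first[slot] = 0;
        return;
    }

    /* Second level: runtime Xen lives in slot 1 (2M - 4M). */
    slot = (start >> SECOND_SHIFT) & LPAE_ENTRY_MASK;
    if ( slot != 1 )
        boot_second[slot] = 0;
    /* If the slot is 1, the ID map was never created. */
}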