
[Xen-devel] xen/arm: Implement domain_get_maximum_gpfn

Message ID 1404226666-7949-1-git-send-email-julien.grall@linaro.org
State New

Commit Message

Julien Grall July 1, 2014, 2:57 p.m. UTC
The function domain_get_maximum_gpfn is returning the maximum gpfn ever
mapped in the guest. We can use d->arch.p2m.max_mapped_gfn for this purpose.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
 xen/arch/arm/mm.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Comments

Stefano Stabellini July 1, 2014, 4:57 p.m. UTC | #1
On Tue, 1 Jul 2014, Julien Grall wrote:
> The function domain_get_maximum_gpfn is returning the maximum gpfn ever
> mapped in the guest. We can use d->arch.p2m.max_mapped_gfn for this purpose.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>

Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>


>  xen/arch/arm/mm.c |    2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 03a0533..2d40f07 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -947,7 +947,7 @@ int page_is_ram_type(unsigned long mfn, unsigned long mem_type)
>  
>  unsigned long domain_get_maximum_gpfn(struct domain *d)
>  {
> -    return -ENOSYS;
> +    return d->arch.p2m.max_mapped_gfn;
>  }
>  
>  void share_xen_page_with_guest(struct page_info *page,
> -- 
> 1.7.10.4
>
Julien Grall July 1, 2014, 6:36 p.m. UTC | #2
On 01/07/14 17:57, Stefano Stabellini wrote:
> On Tue, 1 Jul 2014, Julien Grall wrote:
>> The function domain_get_maximum_gpfn is returning the maximum gpfn ever
>> mapped in the guest. We can use d->arch.p2m.max_mapped_gfn for this purpose.
>>
>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
>
> Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Thanks. Just a quick follow-up on your question on IRC.

LPAE supports up to 48 bits on ARMv8 (40 bits on v7), so the MFN will 
just fit in 32 bits.

I'm a bit worried about what happens if there is an error. The current
hypercall doesn't look safe for that: the return value is used to store
the highest gpfn, so if the guest also uses that value for internal
errors, then we are screwed.

This is mostly an issue when the toolstack is running in a 32-bit guest
on a 64-bit hypervisor. How does x86 support this case?

Stefano was suggesting introducing a new hypercall,
XENMEM_maximum_gpfn_v2, which would take a pointer to a gpfn as a parameter.
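
Roughly, such an interface could look like this (a sketch only: the sub-op,
struct and field names below are hypothetical and don't exist anywhere yet):

    /* Hypothetical sketch of the suggested sub-op; nothing like this exists
     * in the tree today, names and fields are purely illustrative. */
    struct xen_maximum_gpfn {
        domid_t domid;          /* IN: domain to query */
        /* OUT: highest gpfn ever mapped in the guest.  Passing it back
         * through the struct leaves the hypercall return value free to
         * carry a plain 0/-errno status, so a large gpfn can never be
         * mistaken for an error code. */
        uint64_aligned_t gpfn;
    };
    typedef struct xen_maximum_gpfn xen_maximum_gpfn_t;
    DEFINE_XEN_GUEST_HANDLE(xen_maximum_gpfn_t);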

Regards,
Andrew Cooper July 1, 2014, 6:53 p.m. UTC | #3
On 01/07/14 19:36, Julien Grall wrote:
>
>
> On 01/07/14 17:57, Stefano Stabellini wrote:
>> On Tue, 1 Jul 2014, Julien Grall wrote:
>>> The function domain_get_maximum_gpfn is returning the maximum gpfn ever
>>> mapped in the guest. We can use d->arch.p2m.max_mapped_gfn for this
>>> purpose.
>>>
>>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
>>
>> Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
>
> Thanks. Just a quick follow-up on your question on IRC.
>
> LPAE supports up to 48 bits on ARMv8 (40 bits on v7), so the MFN will
> just fit in 32 bits.
>
> I'm a bit worry about what happen if there is an error? The current
> hypercall doesn't look like to be safe for that. Indeed, the return
> value is used to store the higher gpfn. If the guest also use internal
> error, then we are screw.

All hypercalls should return a long (domains' natural width) in their
first parameter, allowing for -ERANGE for truncation cases.  Not all
hypercalls actually adhere to this currently, but retroactively
enforcing this in the ABI should be fine, as it only changes the
cases where we couldn't represent the result correctly anyway and
effectively returned junk.
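
A minimal sketch of the sort of check that implies (illustrative only, not
the actual compat code; "is_32bit_caller" is a made-up stand-in predicate):

    /* Sketch: refuse to hand a too-wide gpfn back to a 32-bit caller. */
    unsigned long gpfn = domain_get_maximum_gpfn(d);
    if ( is_32bit_caller && gpfn != (uint32_t)gpfn )
        return -ERANGE;
    return gpfn;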

>
> This is mostly an issue when the toolstack is running in 32 bits guest
> on 64 bits hypervisor. How x86 support this case?

The 32bit compat layer in Xen does quite a lot of truncation detection,
and will fail with -ERANGE.

There are other issues, such as libxc truncating the return value of
this specific hypercall from a long to an int (although that is rather
simpler to fix).
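
In miniature, the truncation in question (illustrative values only):

    /* Sketch: narrowing a wide gpfn through an int-returning API mangles it. */
    long max_gpfn  = 0x1000000000L;  /* a gpfn that needs more than 32 bits */
    int  api_value = (int)max_gpfn;  /* what the int-returning wrapper yields: 0 */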

>
> Stefano was suggesting to introduce a new hypercall
> XENMEM_maximum_gpfn_v2 which will take a pointer to a gpfn in parameter.
>
> Regards,
>

I am not sure this is actually needed.  A toolstack of thinner width
than Xen almost certainly won't be capable of constructing such a
domain; there are many other similar issues in the Xen/toolstack ABI/API
at the point at which hypercalls like this would start truncating.

~Andrew
Ian Campbell July 2, 2014, 9:12 a.m. UTC | #4
On Tue, 2014-07-01 at 15:57 +0100, Julien Grall wrote:
> The function domain_get_maximum_gpfn is returning the maximum gpfn ever
> mapped in the guest. We can use d->arch.p2m.max_mapped_gfn for this purpose.

What is using the result of this hypercall?

> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> ---
>  xen/arch/arm/mm.c |    2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 03a0533..2d40f07 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -947,7 +947,7 @@ int page_is_ram_type(unsigned long mfn, unsigned long mem_type)
>  
>  unsigned long domain_get_maximum_gpfn(struct domain *d)
>  {
> -    return -ENOSYS;
> +    return d->arch.p2m.max_mapped_gfn;
>  }
>  
>  void share_xen_page_with_guest(struct page_info *page,
Julien Grall July 2, 2014, 9:19 a.m. UTC | #5
Hi Ian,

On 02/07/14 10:12, Ian Campbell wrote:
> On Tue, 2014-07-01 at 15:57 +0100, Julien Grall wrote:
>> The function domain_get_maximum_gpfn is returning the maximum gpfn ever
>> mapped in the guest. We can use d->arch.p2m.max_mapped_gfn for this purpose.
>
> What is using the result of this hypercall?

The result is at least used by xc_dom_gnttab_hvm_seed, to get a scratch
GFN to initialize the grant table.

IMHO this is buggy on ARM (and x86?), because we could have mapped
everything up to the end of the address space (currently 40 bits).

I plan to rework this code a bit.

Without this patch, xc_dom_gnttab_hvm_seed will mistakenly use pfn 0
(0xffffffff + 1).
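
Roughly, the seeding code derives its scratch gfn like this (a simplified
sketch of the libxc behaviour, not the verbatim code):

    /* Sketch: pick the scratch slot one past the reported maximum gfn. */
    xen_pfn_t max_gfn = xc_domain_maximum_gpfn(xch, domid);
    xen_pfn_t scratch_gpfn = max_gfn + 1;  /* 0xffffffff + 1 wraps to pfn 0 */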

Regards,
Ian Campbell July 2, 2014, 9:22 a.m. UTC | #6
On Wed, 2014-07-02 at 10:19 +0100, Julien Grall wrote:
> Hi Ian,
> 
> On 02/07/14 10:12, Ian Campbell wrote:
> > On Tue, 2014-07-01 at 15:57 +0100, Julien Grall wrote:
> >> The function domain_get_maximum_gpfn is returning the maximum gpfn ever
> >> mapped in the guest. We can use d->arch.p2m.max_mapped_gfn for this purpose.
> >
> > What is using the result of this hypercall?
> 
> The result is at least used by xc_dom_gnttab_hvm_seed, to get a scratch 
> GFN to initialize grant table.
> 
> IHMO this is buggy on ARM (and x86?), because we could have map 
> everything up to the end of the address space (currently 40 bits).

I wonder if we could find a way to not need this hypercall at all.

Any reason why both arm and x86 can't just use a fixed scratch pfn for
this temporary mapping? Both of them surely have spaces which they can
guarantee won't overlap with anything.

> I plan to rework a bit this code.
> 
> Without this patch, xc_dom_gnttab_hvm_seed will use by mistake the pfn 0 
> (0xffffffff + 1).
> 
> Regards,
> 
>
Julien Grall July 2, 2014, 9:37 a.m. UTC | #7
(Adding Roger)

On 02/07/14 10:22, Ian Campbell wrote:
> On Wed, 2014-07-02 at 10:19 +0100, Julien Grall wrote:
>> Hi Ian,
>>
>> On 02/07/14 10:12, Ian Campbell wrote:
>>> On Tue, 2014-07-01 at 15:57 +0100, Julien Grall wrote:
>>>> The function domain_get_maximum_gpfn is returning the maximum gpfn ever
>>>> mapped in the guest. We can use d->arch.p2m.max_mapped_gfn for this purpose.
>>>
>>> What is using the result of this hypercall?
>>
>> The result is at least used by xc_dom_gnttab_hvm_seed, to get a scratch
>> GFN to initialize grant table.
>>
>> IHMO this is buggy on ARM (and x86?), because we could have map
>> everything up to the end of the address space (currently 40 bits).
> 
> I wonder if we could find a way to not need this hypercall at all.
> 
> Any reason why both arm and x86 can't just use a fixed scratch pfn for
> this temporary mapping? Both of them surely have spaces which they can
> guarantee won't overlap with anything.

This was the previous behavior until last November.

commit db062c28f30eb68d1b5d7a910445a0ba1136179a
Date:   Wed Nov 13 09:26:13 2013 +0100

    libxc: move temporary grant table mapping to end of memory
    
    In order to set up the grant table for HVM guests, libxc needs to map
    the grant table temporarily.  At the moment, it does this by adding the
    grant page to the HVM guest's p2m table in the MMIO hole (at gfn 0xFFFFE),
    then mapping that gfn, setting up the table, then unmapping the gfn and
    removing it from the p2m table.
    
    This breaks with PVH guests with 4G or more of ram, because there is
    no MMIO hole; so it ends up clobbering a valid RAM p2m entry, then
    leaving a "hole" when it removes the grant map from the p2m table.
    Since the guest thinks this is normal ram, when it maps it and tries
    to access the page, it crashes.
    
    This patch maps the page at max_gfn+1 instead.

I'm not sure what to do for x86, so I was planning to introduce a per-arch hook to retrieve a scratch gpfn.
x86 would keep the current behavior, and ARM would use the GNTTAB space in the layout.
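
As a very rough sketch of the kind of hook I have in mind (the function name
is invented; GUEST_GNTTAB_BASE is the grant-table region in the ARM guest
layout):

    /* Sketch only: per-arch choice of the scratch gfn for the temporary map. */
    static xen_pfn_t xc_dom_scratch_gpfn(xc_interface *xch, domid_t domid)
    {
    #if defined(__arm__) || defined(__aarch64__)
        /* ARM: reuse a gfn inside the guest layout's grant-table region. */
        return GUEST_GNTTAB_BASE >> XC_PAGE_SHIFT;
    #else
        /* x86: keep today's behaviour of going one past the maximum gfn. */
        return xc_domain_maximum_gpfn(xch, domid) + 1;
    #endif
    }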

Regards,
Ian Campbell July 2, 2014, 9:41 a.m. UTC | #8
On Wed, 2014-07-02 at 10:37 +0100, Julien Grall wrote:
> (Adding Roger)
> 
> On 02/07/14 10:22, Ian Campbell wrote:
> > On Wed, 2014-07-02 at 10:19 +0100, Julien Grall wrote:
> >> Hi Ian,
> >>
> >> On 02/07/14 10:12, Ian Campbell wrote:
> >>> On Tue, 2014-07-01 at 15:57 +0100, Julien Grall wrote:
> >>>> The function domain_get_maximum_gpfn is returning the maximum gpfn ever
> >>>> mapped in the guest. We can use d->arch.p2m.max_mapped_gfn for this purpose.
> >>>
> >>> What is using the result of this hypercall?
> >>
> >> The result is at least used by xc_dom_gnttab_hvm_seed, to get a scratch
> >> GFN to initialize grant table.
> >>
> >> IHMO this is buggy on ARM (and x86?), because we could have map
> >> everything up to the end of the address space (currently 40 bits).
> > 
> > I wonder if we could find a way to not need this hypercall at all.
> > 
> > Any reason why both arm and x86 can't just use a fixed scratch pfn for
> > this temporary mapping? Both of them surely have spaces which they can
> > guarantee won't overlap with anything.
> 
> This was the previous behavior until last November.
> 
> commit db062c28f30eb68d1b5d7a910445a0ba1136179a
> Date:   Wed Nov 13 09:26:13 2013 +0100
> 
>     libxc: move temporary grant table mapping to end of memory
>     
>     In order to set up the grant table for HVM guests, libxc needs to map
>     the grant table temporarily.  At the moment, it does this by adding the
>     grant page to the HVM guest's p2m table in the MMIO hole (at gfn 0xFFFFE),
>     then mapping that gfn, setting up the table, then unmapping the gfn and
>     removing it from the p2m table.
>     
>     This breaks with PVH guests with 4G or more of ram, because there is
>     no MMIO hole; so it ends up clobbering a valid RAM p2m entry, then
>     leaving a "hole" when it removes the grant map from the p2m table.
>     Since the guest thinks this is normal ram, when it maps it and tries
>     to access the page, it crashes.
>     
>     This patch maps the page at max_gfn+1 instead.
> 
> I'm not sure what to do for x86, so I was planning to introduce a per-arch hook to retrieve a scratch gpfn.
> x86 would keep the current behavior, and ARM will use the GNTTAB space in the layout.

Perhaps x86 could use some well known MMIO space, like the APIC at
0xfff????

(adding some more x86 folks)

Ian.
Jan Beulich July 2, 2014, 9:50 a.m. UTC | #9
>>> On 02.07.14 at 11:41, <Ian.Campbell@citrix.com> wrote:
> On Wed, 2014-07-02 at 10:37 +0100, Julien Grall wrote:
>> On 02/07/14 10:22, Ian Campbell wrote:
>> > Any reason why both arm and x86 can't just use a fixed scratch pfn for
>> > this temporary mapping? Both of them surely have spaces which they can
>> > guarantee won't overlap with anything.
>> 
>> This was the previous behavior until last November.
>> 
>> commit db062c28f30eb68d1b5d7a910445a0ba1136179a
>> Date:   Wed Nov 13 09:26:13 2013 +0100
>> 
>>     libxc: move temporary grant table mapping to end of memory
>>     
>>     In order to set up the grant table for HVM guests, libxc needs to map
>>     the grant table temporarily.  At the moment, it does this by adding the
>>     grant page to the HVM guest's p2m table in the MMIO hole (at gfn 0xFFFFE),
>>     then mapping that gfn, setting up the table, then unmapping the gfn and
>>     removing it from the p2m table.
>>     
>>     This breaks with PVH guests with 4G or more of ram, because there is
>>     no MMIO hole; so it ends up clobbering a valid RAM p2m entry, then
>>     leaving a "hole" when it removes the grant map from the p2m table.
>>     Since the guest thinks this is normal ram, when it maps it and tries
>>     to access the page, it crashes.
>>     
>>     This patch maps the page at max_gfn+1 instead.
>> 
>> I'm not sure what to do for x86, so I was planning to introduce a per-arch 
> hook to retrieve a scratch gpfn.
>> x86 would keep the current behavior, and ARM will use the GNTTAB space in 
> the layout.
> 
> Perhaps x86 could use some well known MMIO space, like the APIC at
> 0xfff????

Except that PVH has no LAPIC right now. Yet with the recent hole
punching patches I wonder whether "there is no MMIO hole" is actually
correct. Roger?

Jan
Ian Campbell July 2, 2014, 9:52 a.m. UTC | #10
On Wed, 2014-07-02 at 10:50 +0100, Jan Beulich wrote:
> >>> On 02.07.14 at 11:41, <Ian.Campbell@citrix.com> wrote:
> > On Wed, 2014-07-02 at 10:37 +0100, Julien Grall wrote:
> >> On 02/07/14 10:22, Ian Campbell wrote:
> >> > Any reason why both arm and x86 can't just use a fixed scratch pfn for
> >> > this temporary mapping? Both of them surely have spaces which they can
> >> > guarantee won't overlap with anything.
> >> 
> >> This was the previous behavior until last November.
> >> 
> >> commit db062c28f30eb68d1b5d7a910445a0ba1136179a
> >> Date:   Wed Nov 13 09:26:13 2013 +0100
> >> 
> >>     libxc: move temporary grant table mapping to end of memory
> >>     
> >>     In order to set up the grant table for HVM guests, libxc needs to map
> >>     the grant table temporarily.  At the moment, it does this by adding the
> >>     grant page to the HVM guest's p2m table in the MMIO hole (at gfn 0xFFFFE),
> >>     then mapping that gfn, setting up the table, then unmapping the gfn and
> >>     removing it from the p2m table.
> >>     
> >>     This breaks with PVH guests with 4G or more of ram, because there is
> >>     no MMIO hole; so it ends up clobbering a valid RAM p2m entry, then
> >>     leaving a "hole" when it removes the grant map from the p2m table.
> >>     Since the guest thinks this is normal ram, when it maps it and tries
> >>     to access the page, it crashes.
> >>     
> >>     This patch maps the page at max_gfn+1 instead.
> >> 
> >> I'm not sure what to do for x86, so I was planning to introduce a per-arch 
> > hook to retrieve a scratch gpfn.
> >> x86 would keep the current behavior, and ARM will use the GNTTAB space in 
> > the layout.
> > 
> > Perhaps x86 could use some well known MMIO space, like the APIC at
> > 0xfff????
> 
> Except that PVH has no LAPIC right now. Yet with the recent hole
> punching patches I wonder whether "there is no MMIO hole" is actually
> correct. Roger?

Another option might be to just burn a special pfn on a scratch mapping.

Ian.
Roger Pau Monné July 2, 2014, 10:19 a.m. UTC | #11
On 02/07/14 11:50, Jan Beulich wrote:
>>>> On 02.07.14 at 11:41, <Ian.Campbell@citrix.com> wrote:
>> On Wed, 2014-07-02 at 10:37 +0100, Julien Grall wrote:
>>> On 02/07/14 10:22, Ian Campbell wrote:
>>>> Any reason why both arm and x86 can't just use a fixed scratch pfn for
>>>> this temporary mapping? Both of them surely have spaces which they can
>>>> guarantee won't overlap with anything.
>>>
>>> This was the previous behavior until last November.
>>>
>>> commit db062c28f30eb68d1b5d7a910445a0ba1136179a
>>> Date:   Wed Nov 13 09:26:13 2013 +0100
>>>
>>>     libxc: move temporary grant table mapping to end of memory
>>>     
>>>     In order to set up the grant table for HVM guests, libxc needs to map
>>>     the grant table temporarily.  At the moment, it does this by adding the
>>>     grant page to the HVM guest's p2m table in the MMIO hole (at gfn 0xFFFFE),
>>>     then mapping that gfn, setting up the table, then unmapping the gfn and
>>>     removing it from the p2m table.
>>>     
>>>     This breaks with PVH guests with 4G or more of ram, because there is
>>>     no MMIO hole; so it ends up clobbering a valid RAM p2m entry, then
>>>     leaving a "hole" when it removes the grant map from the p2m table.
>>>     Since the guest thinks this is normal ram, when it maps it and tries
>>>     to access the page, it crashes.
>>>     
>>>     This patch maps the page at max_gfn+1 instead.
>>>
>>> I'm not sure what to do for x86, so I was planning to introduce a per-arch 
>> hook to retrieve a scratch gpfn.
>>> x86 would keep the current behavior, and ARM will use the GNTTAB space in 
>> the layout.
>>
>> Perhaps x86 could use some well known MMIO space, like the APIC at
>> 0xfff????
> 
> Except that PVH has no LAPIC right now. Yet with the recent hole
> punching patches I wonder whether "there is no MMIO hole" is actually
> correct. Roger?

For PVH guests there's still no MMIO hole (or any other kind of hole) at
all; the hole(s) are only there for Dom0.

Roger.
Jan Beulich July 2, 2014, 10:31 a.m. UTC | #12
>>> On 02.07.14 at 12:19, <roger.pau@citrix.com> wrote:
> For PVH guests there's still no MMIO hole (or any other kind of hole) at
> all, the hole(s) is only there for Dom0.

So where would passed through devices get their MMIO BARs located?
(I realize pass-through isn't supported yet for PVH, but I didn't expect
such fundamental things to be missing.)

Jan
Roger Pau Monné July 2, 2014, 10:51 a.m. UTC | #13
On 02/07/14 12:31, Jan Beulich wrote:
>>>> On 02.07.14 at 12:19, <roger.pau@citrix.com> wrote:
>> For PVH guests there's still no MMIO hole (or any other kind of hole) at
>> all, the hole(s) is only there for Dom0.
> 
> So where would passed through devices get their MMIO BARs located?
> (I realize pass-through isn't supported yet for PVH, but I didn't expect
> such fundamental things to be missing.)

We could always add an MMIO region to a PVH guest in a backwards-compatible
way; the only requirement is to make sure the e820 provided to the guest
has this hole set up. But I see no reason to add it before having this
functionality, or to add it unconditionally to guests even if no devices
are passed through.

Also, shouldn't PVH guests use pcifront/pciback, which means they won't
have any BARs mapped directly?

Roger.
Ian Campbell July 2, 2014, 10:52 a.m. UTC | #14
On Wed, 2014-07-02 at 12:51 +0200, Roger Pau Monné wrote:
> On 02/07/14 12:31, Jan Beulich wrote:
> >>>> On 02.07.14 at 12:19, <roger.pau@citrix.com> wrote:
> >> For PVH guests there's still no MMIO hole (or any other kind of hole) at
> >> all, the hole(s) is only there for Dom0.
> > 
> > So where would passed through devices get their MMIO BARs located?
> > (I realize pass-through isn't supported yet for PVH, but I didn't expect
> > such fundamental things to be missing.)
> 
> We could always add a MMIO region to a PVH guest in backwards compatible
> way, the only requirement is to make sure the e820 provided to the guest
> has this hole set up, but I see no reason to add it before having this
> functionality, or to add it unconditionally to guests even if no devices
> are passed through.
> 
> Also, shouldn't PVH guests use pcifront/pciback, which means it won't
> have any BARs mapped directly?

They need to map them somewhere in their physical address space to be able
to use them... (Unlike a PV guest, which I think maps them in the virtual
address space "by magic", avoiding the need for a p2m entry).

Ian.
Andrew Cooper July 2, 2014, 10:58 a.m. UTC | #15
On 02/07/14 11:52, Ian Campbell wrote:
> On Wed, 2014-07-02 at 12:51 +0200, Roger Pau Monné wrote:
>> On 02/07/14 12:31, Jan Beulich wrote:
>>>>>> On 02.07.14 at 12:19, <roger.pau@citrix.com> wrote:
>>>> For PVH guests there's still no MMIO hole (or any other kind of hole) at
>>>> all, the hole(s) is only there for Dom0.
>>> So where would passed through devices get their MMIO BARs located?
>>> (I realize pass-through isn't supported yet for PVH, but I didn't expect
>>> such fundamental things to be missing.)
>> We could always add a MMIO region to a PVH guest in backwards compatible
>> way, the only requirement is to make sure the e820 provided to the guest
>> has this hole set up, but I see no reason to add it before having this
>> functionality, or to add it unconditionally to guests even if no devices
>> are passed through.
>>
>> Also, shouldn't PVH guests use pcifront/pciback, which means it won't
>> have any BARs mapped directly?
> They need to map them somewhere in their physical address to be able to
> use them... (Unlike a PV guest which I think maps them in the virtual
> address space "by magic" avoiding the need for a p2m entry).
>
> Ian.

With respect to the original problem of accidentally punching a hole in
the guest:

Why can't libxc clean up after itself?  From my understanding, it is a
simple increase-reservation to fill the hole it 'borrowed' during setup.

This avoids MMIO ranges in pure PVH guests (arm included).
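
Something along these lines, as a sketch (assuming libxc has remembered the
scratch gfn it used; "scratch_gpfn" is that remembered value):

    /* Sketch: put a fresh RAM page back into the slot borrowed for the map. */
    xen_pfn_t gpfn = scratch_gpfn;
    int rc = xc_domain_populate_physmap_exact(xch, domid, 1 /* nr_extents */,
                                              0 /* extent_order */,
                                              0 /* mem_flags */, &gpfn);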

~Andrew
Ian Campbell July 2, 2014, 11:21 a.m. UTC | #16
On Wed, 2014-07-02 at 11:58 +0100, Andrew Cooper wrote:
> On 02/07/14 11:52, Ian Campbell wrote:
> > On Wed, 2014-07-02 at 12:51 +0200, Roger Pau Monné wrote:
> >> On 02/07/14 12:31, Jan Beulich wrote:
> >>>>>> On 02.07.14 at 12:19, <roger.pau@citrix.com> wrote:
> >>>> For PVH guests there's still no MMIO hole (or any other kind of hole) at
> >>>> all, the hole(s) is only there for Dom0.
> >>> So where would passed through devices get their MMIO BARs located?
> >>> (I realize pass-through isn't supported yet for PVH, but I didn't expect
> >>> such fundamental things to be missing.)
> >> We could always add a MMIO region to a PVH guest in backwards compatible
> >> way, the only requirement is to make sure the e820 provided to the guest
> >> has this hole set up, but I see no reason to add it before having this
> >> functionality, or to add it unconditionally to guests even if no devices
> >> are passed through.
> >>
> >> Also, shouldn't PVH guests use pcifront/pciback, which means it won't
> >> have any BARs mapped directly?
> > They need to map them somewhere in their physical address to be able to
> > use them... (Unlike a PV guest which I think maps them in the virtual
> > address space "by magic" avoiding the need for a p2m entry).
> >
> > Ian.
> 
> With respect to the original problem of accidentally punching a hole in
> the guest
> 
> Why cant libxc clean up after itself?  From my understanding, it is a
> simple increase reservation to fill the hole it 'borrowed' during setup.

I think the issue is that in the functions in question it doesn't know
if there was RAM there to start with (since it's based on guest RAM
allocation and layout etc), so it doesn't know if it needs to refill the
hole or not.

Not insurmountable though I suspect.

> 
> This avoids MMIO ranges in pure PVH guests (arm included).
> 
> ~Andrew
Konrad Rzeszutek Wilk July 2, 2014, 1:44 p.m. UTC | #17
On Wed, Jul 02, 2014 at 11:31:05AM +0100, Jan Beulich wrote:
> >>> On 02.07.14 at 12:19, <roger.pau@citrix.com> wrote:
> > For PVH guests there's still no MMIO hole (or any other kind of hole) at
> > all, the hole(s) is only there for Dom0.
> 
> So where would passed through devices get their MMIO BARs located?

If it ends up using 'e820_host', the E820 that the guest has
ends up looking like the host's. At that point the memory
map looks quite similar to the dom0 one - and the MMIO
BARs should fit in nicely.

> (I realize pass-through isn't supported yet for PVH, but I didn't expect
> such fundamental things to be missing.)
> 
> Jan
> 
Julien Grall July 9, 2014, 11:38 a.m. UTC | #18
Hi Ian,

On 07/01/2014 05:57 PM, Stefano Stabellini wrote:
> On Tue, 1 Jul 2014, Julien Grall wrote:
>> The function domain_get_maximum_gpfn is returning the maximum gpfn ever
>> mapped in the guest. We can use d->arch.p2m.max_mapped_gfn for this purpose.
>>
>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> 
> Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Following the discussion on this thread, I will send a patch to fix
xc_dom_gnttab_hvm_seed at least for ARM. I'm not sure what to do on x86;
I may keep the same behavior.

I think this patch is still relevant and won't break things, as we don't
have platforms with 48-bit PA support.

Ian, do I have your ack on this patch?

>>  xen/arch/arm/mm.c |    2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
>> index 03a0533..2d40f07 100644
>> --- a/xen/arch/arm/mm.c
>> +++ b/xen/arch/arm/mm.c
>> @@ -947,7 +947,7 @@ int page_is_ram_type(unsigned long mfn, unsigned long mem_type)
>>  
>>  unsigned long domain_get_maximum_gpfn(struct domain *d)
>>  {
>> -    return -ENOSYS;
>> +    return d->arch.p2m.max_mapped_gfn;
>>  }
>>  
>>  void share_xen_page_with_guest(struct page_info *page,
>> -- 
>> 1.7.10.4
>>

Regards,
Ian Campbell July 16, 2014, 4:02 p.m. UTC | #19
On Wed, 2014-07-09 at 12:38 +0100, Julien Grall wrote:
> Hi Ian,
> 
> On 07/01/2014 05:57 PM, Stefano Stabellini wrote:
> > On Tue, 1 Jul 2014, Julien Grall wrote:
> >> The function domain_get_maximum_gpfn is returning the maximum gpfn ever
> >> mapped in the guest. We can use d->arch.p2m.max_mapped_gfn for this purpose.
> >>
> >> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> > 
> > Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> 
> Following the discussion on this thread, I will send a patch to fix
> xc_dom_gnttab_hvm_seed at least for ARM. Not sure what to do on x86, I
> may keep the same behavior.
> 
> I think this patch is still relevant and won't break things as we don't
> have platform with 48bits PA support.
> 
> Ian, do I have your ack on this patch?

Oops, when I first saw this I read the first para and stopped, sorry.

I'd much prefer to just have the fix to xc_dom_gnttab_hvm_seed for ARM
and continue to punt on this interface until it is actually needed by
something unavoidable on the guest side (and simultaneously hope that
day never comes...).

Ian.
Julien Grall July 16, 2014, 6:17 p.m. UTC | #20
Hi Ian,

On 16/07/14 17:02, Ian Campbell wrote:
> On Wed, 2014-07-09 at 12:38 +0100, Julien Grall wrote:
>> Hi Ian,
>>
>> On 07/01/2014 05:57 PM, Stefano Stabellini wrote:
>>> On Tue, 1 Jul 2014, Julien Grall wrote:
>>>> The function domain_get_maximum_gpfn is returning the maximum gpfn ever
>>>> mapped in the guest. We can use d->arch.p2m.max_mapped_gfn for this purpose.
>>>>
>>>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
>>>
>>> Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
>>
>> Following the discussion on this thread, I will send a patch to fix
>> xc_dom_gnttab_hvm_seed at least for ARM. Not sure what to do on x86, I
>> may keep the same behavior.
>>
>> I think this patch is still relevant and won't break things as we don't
>> have platform with 48bits PA support.
>>
>> Ian, do I have your ack on this patch?
>
> Oops, when I first saw this I read the first para and stopped, sorry.
>
> I'd much prefer to just have the fix to xc_dom_gnttab_hvm_seed for ARM
> and continue to punt on this interface until it is actually needed by
> something unavoidable on the guest side (and simultaneously hope that
> day never comes...).

This hypercall is used in two different locations: domain save and core
dump.

I expect to have both working on Xen ARM soon. I will also send a fix for
xc_dom_gnttab_hvm_seed when I find the time.

Regards,
Julien Grall Sept. 1, 2014, 9:32 p.m. UTC | #21
Hi Ian,

On 16/07/14 12:02, Ian Campbell wrote:
> I'd much prefer to just have the fix to xc_dom_gnttab_hvm_seed for ARM
> and continue to punt on this interface until it is actually needed by
> something unavoidable on the guest side (and simultaneously hope that
> day never comes...).

This patch is a requirement to make Xen Memory access work on ARM.
Could you reconsider applying this patch to Xen?

For the main concern of this thread (i.e. the buggy scratch pfn with
xc_dom_gnttab_hvm_seed), I wrote a patch to let the architecture decide
which scratch pfn should be used. I will send the patch next week.

Regards,
Ian Campbell Sept. 3, 2014, 8:44 a.m. UTC | #22
On Mon, 2014-09-01 at 17:32 -0400, Julien Grall wrote:
> Hi Ian,
> 
> On 16/07/14 12:02, Ian Campbell wrote:
> > I'd much prefer to just have the fix to xc_dom_gnttab_hvm_seed for ARM
> > and continue to punt on this interface until it is actually needed by
> > something unavoidable on the guest side (and simultaneously hope that
> > day never comes...).
> 
> This patch is a requirement to make Xen Memory access working on ARM. 
> Could you reconsider the possibility to apply this patch on Xen?

Needs more rationale as to why it is required for Xen Memory (do you
mean xenaccess?). I assume I'll find that in the relevant thread once I
get to it?

> For the main concern of this thread (i.e the buggy scratch pfn with 
> xc_dom_gnttab_hvm_seed), I wrote a patch to let the architecture decided 
> which scrach pfn should be used. I will send the patch next week.

OK.
Tamas K Lengyel Sept. 3, 2014, 9 a.m. UTC | #23
On Wed, Sep 3, 2014 at 10:44 AM, Ian Campbell <Ian.Campbell@citrix.com>
wrote:

> On Mon, 2014-09-01 at 17:32 -0400, Julien Grall wrote:
> > Hi Ian,
> >
> > On 16/07/14 12:02, Ian Campbell wrote:
> > > I'd much prefer to just have the fix to xc_dom_gnttab_hvm_seed for ARM
> > > and continue to punt on this interface until it is actually needed by
> > > something unavoidable on the guest side (and simultaneously hope that
> > > day never comes...).
> >
> > This patch is a requirement to make Xen Memory access working on ARM.
> > Could you reconsider the possibility to apply this patch on Xen?
>
> Needs more rationale as to why it is required for Xen Memory (do you
> mean xenaccess?). I assume I'll find that in the relevant thread once I
> get to it?
>
>
It's used in a non-critical sanity check for performance reasons, as seen
here:
https://github.com/tklengyel/xen/blob/arm_memaccess3/xen/common/mem_access.c#L75.
Without the sanity check we might attempt to set mem_access permissions on
gpfn's that don't exist for the guest. It wouldn't break anything to do
that but if we know beforehand that the gpfn is outside the scope of what
the guest has we can skip the entire thing.
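
In essence it is just an early bound test, along these lines (a simplified
sketch, not the exact code from the branch; "pfn" stands for the first gpfn
of the requested range):

    /* Sketch: skip the whole operation for gpfns the guest never had mapped. */
    if ( pfn > domain_get_maximum_gpfn(d) )
        return -EINVAL;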


> > For the main concern of this thread (i.e the buggy scratch pfn with
> > xc_dom_gnttab_hvm_seed), I wrote a patch to let the architecture decided
> > which scrach pfn should be used. I will send the patch next week.
>
> OK.
>
>
Julien Grall Sept. 8, 2014, 8:43 p.m. UTC | #24
Hello Tamas,

On 03/09/14 02:00, Tamas K Lengyel wrote:
>
>
>
> On Wed, Sep 3, 2014 at 10:44 AM, Ian Campbell <Ian.Campbell@citrix.com
> <mailto:Ian.Campbell@citrix.com>> wrote:
>
>     On Mon, 2014-09-01 at 17:32 -0400, Julien Grall wrote:
>      > Hi Ian,
>      >
>      > On 16/07/14 12:02, Ian Campbell wrote:
>      > > I'd much prefer to just have the fix to xc_dom_gnttab_hvm_seed
>     for ARM
>      > > and continue to punt on this interface until it is actually
>     needed by
>      > > something unavoidable on the guest side (and simultaneously
>     hope that
>      > > day never comes...).
>      >
>      > This patch is a requirement to make Xen Memory access working on ARM.
>      > Could you reconsider the possibility to apply this patch on Xen?
>
>     Needs more rationale as to why it is required for Xen Memory (do you
>     mean xenaccess?). I assume I'll find that in the relevant thread once I
>     get to it?
>
>
> It's used in a non-critical sanity check for performance reasons, as
> seen here:
> https://github.com/tklengyel/xen/blob/arm_memaccess3/xen/common/mem_access.c#L75.
> Without the sanity check we might attempt to set mem_access permissions
> on gpfn's that don't exist for the guest. It wouldn't break anything to
> do that but if we know beforehand that the gpfn is outside the scope of
> what the guest has we can skip the entire thing.

It might be better if you carry this patch on your series.

Regards,
Tamas K Lengyel Sept. 8, 2014, 8:47 p.m. UTC | #25
On Mon, Sep 8, 2014 at 10:43 PM, Julien Grall <julien.grall@linaro.org>
wrote:

> Hello Tamas,
>
> On 03/09/14 02:00, Tamas K Lengyel wrote:
>
>>
>>
>>
>> On Wed, Sep 3, 2014 at 10:44 AM, Ian Campbell <Ian.Campbell@citrix.com
>> <mailto:Ian.Campbell@citrix.com>> wrote:
>>
>>     On Mon, 2014-09-01 at 17:32 -0400, Julien Grall wrote:
>>      > Hi Ian,
>>      >
>>      > On 16/07/14 12:02, Ian Campbell wrote:
>>      > > I'd much prefer to just have the fix to xc_dom_gnttab_hvm_seed
>>     for ARM
>>      > > and continue to punt on this interface until it is actually
>>     needed by
>>      > > something unavoidable on the guest side (and simultaneously
>>     hope that
>>      > > day never comes...).
>>      >
>>      > This patch is a requirement to make Xen Memory access working on
>> ARM.
>>      > Could you reconsider the possibility to apply this patch on Xen?
>>
>>     Needs more rationale as to why it is required for Xen Memory (do you
>>     mean xenaccess?). I assume I'll find that in the relevant thread once
>> I
>>     get to it?
>>
>>
>> It's used in a non-critical sanity check for performance reasons, as
>> seen here:
>> https://github.com/tklengyel/xen/blob/arm_memaccess3/xen/
>> common/mem_access.c#L75.
>> Without the sanity check we might attempt to set mem_access permissions
>> on gpfn's that don't exist for the guest. It wouldn't break anything to
>> do that but if we know beforehand that the gpfn is outside the scope of
>> what the guest has we can skip the entire thing.
>>
>
> It might be better if you carry this patch on your series.
>
> Regards,
>
> --
> Julien Grall
>

Alright.

Thanks,
Tamas
Tamas K Lengyel Sept. 9, 2014, 12:50 p.m. UTC | #26
On Mon, Sep 8, 2014 at 10:47 PM, Tamas K Lengyel <tamas.lengyel@zentific.com> wrote:

>
> On Mon, Sep 8, 2014 at 10:43 PM, Julien Grall <julien.grall@linaro.org>
> wrote:
>
>> Hello Tamas,
>>
>> On 03/09/14 02:00, Tamas K Lengyel wrote:
>>
>>>
>>>
>>>
>>> On Wed, Sep 3, 2014 at 10:44 AM, Ian Campbell <Ian.Campbell@citrix.com
>>> <mailto:Ian.Campbell@citrix.com>> wrote:
>>>
>>>     On Mon, 2014-09-01 at 17:32 -0400, Julien Grall wrote:
>>>      > Hi Ian,
>>>      >
>>>      > On 16/07/14 12:02, Ian Campbell wrote:
>>>      > > I'd much prefer to just have the fix to xc_dom_gnttab_hvm_seed
>>>     for ARM
>>>      > > and continue to punt on this interface until it is actually
>>>     needed by
>>>      > > something unavoidable on the guest side (and simultaneously
>>>     hope that
>>>      > > day never comes...).
>>>      >
>>>      > This patch is a requirement to make Xen Memory access working on
>>> ARM.
>>>      > Could you reconsider the possibility to apply this patch on Xen?
>>>
>>>     Needs more rationale as to why it is required for Xen Memory (do you
>>>     mean xenaccess?). I assume I'll find that in the relevant thread
>>> once I
>>>     get to it?
>>>
>>>
>>> It's used in a non-critical sanity check for performance reasons, as
>>> seen here:
>>> https://github.com/tklengyel/xen/blob/arm_memaccess3/xen/
>>> common/mem_access.c#L75.
>>> Without the sanity check we might attempt to set mem_access permissions
>>> on gpfn's that don't exist for the guest. It wouldn't break anything to
>>> do that but if we know beforehand that the gpfn is outside the scope of
>>> what the guest has we can skip the entire thing.
>>>
>>
>> It might be better if you carry this patch on your series.
>>
>> Regards,
>>
>> --
>> Julien Grall
>>
>
> Alright.
>
> Thanks,
> Tamas
>

As a side note, if this patch is problematic to merge for some reason, the
current implementation still needs to change to return 0 instead of -ENOSYS
so as to conform to the x86 implementation. On the x86 side 0 is used to
indicate failure. See 7ffc9779aa5120c5098d938cb88f69a1dda9a0fe "x86: make
certain memory sub-ops return valid values" for more info.

Tamas
Andrew Cooper Sept. 9, 2014, 1:09 p.m. UTC | #27
On 09/09/14 13:50, Tamas K Lengyel wrote:
>
>
> On Mon, Sep 8, 2014 at 10:47 PM, Tamas K Lengyel
> <tamas.lengyel@zentific.com <mailto:tamas.lengyel@zentific.com>> wrote:
>
>
>     On Mon, Sep 8, 2014 at 10:43 PM, Julien Grall
>     <julien.grall@linaro.org <mailto:julien.grall@linaro.org>> wrote:
>
>         Hello Tamas,
>
>         On 03/09/14 02:00, Tamas K Lengyel wrote:
>
>
>
>
>             On Wed, Sep 3, 2014 at 10:44 AM, Ian Campbell
>             <Ian.Campbell@citrix.com <mailto:Ian.Campbell@citrix.com>
>             <mailto:Ian.Campbell@citrix.com
>             <mailto:Ian.Campbell@citrix.com>>> wrote:
>
>                 On Mon, 2014-09-01 at 17:32 -0400, Julien Grall wrote:
>                  > Hi Ian,
>                  >
>                  > On 16/07/14 12:02, Ian Campbell wrote:
>                  > > I'd much prefer to just have the fix to
>             xc_dom_gnttab_hvm_seed
>                 for ARM
>                  > > and continue to punt on this interface until it
>             is actually
>                 needed by
>                  > > something unavoidable on the guest side (and
>             simultaneously
>                 hope that
>                  > > day never comes...).
>                  >
>                  > This patch is a requirement to make Xen Memory
>             access working on ARM.
>                  > Could you reconsider the possibility to apply this
>             patch on Xen?
>
>                 Needs more rationale as to why it is required for Xen
>             Memory (do you
>                 mean xenaccess?). I assume I'll find that in the
>             relevant thread once I
>                 get to it?
>
>
>             It's used in a non-critical sanity check for performance
>             reasons, as
>             seen here:
>             https://github.com/tklengyel/xen/blob/arm_memaccess3/xen/common/mem_access.c#L75.
>             Without the sanity check we might attempt to set
>             mem_access permissions
>             on gpfn's that don't exist for the guest. It wouldn't
>             break anything to
>             do that but if we know beforehand that the gpfn is outside
>             the scope of
>             what the guest has we can skip the entire thing.
>
>
>         It might be better if you carry this patch on your series.
>
>         Regards,
>
>         -- 
>         Julien Grall
>
>
>     Alright.
>
>     Thanks,
>     Tamas
>
>
> As a sidenote, if this patch is problematic to merge for some reason,
> the current implementation still needs to change to return 0 instead
> of -ENOSYS as to conform to the x86 implementation. On the x86 side 0
> is used to indicate failure. See
> 7ffc9779aa5120c5098d938cb88f69a1dda9a0fe "x86: make certain memory
> sub-ops return valid values" for more info.

0 does not indicate failure.  It indicates the value held in the shared
info field which, if 0, indicates that the domain is still being built
by the toolstack.

-ENOSYS is still a valid failure case.

~Andrew
Tamas K Lengyel Sept. 9, 2014, 2:01 p.m. UTC | #28
On Tue, Sep 9, 2014 at 3:09 PM, Andrew Cooper <andrew.cooper3@citrix.com>
wrote:

>  On 09/09/14 13:50, Tamas K Lengyel wrote:
>
>
>
> On Mon, Sep 8, 2014 at 10:47 PM, Tamas K Lengyel <
> tamas.lengyel@zentific.com> wrote:
>
>>
>>   On Mon, Sep 8, 2014 at 10:43 PM, Julien Grall <julien.grall@linaro.org>
>> wrote:
>>
>>> Hello Tamas,
>>>
>>> On 03/09/14 02:00, Tamas K Lengyel wrote:
>>>
>>>>
>>>>
>>>>
>>>> On Wed, Sep 3, 2014 at 10:44 AM, Ian Campbell <Ian.Campbell@citrix.com
>>>>  <mailto:Ian.Campbell@citrix.com>> wrote:
>>>>
>>>>     On Mon, 2014-09-01 at 17:32 -0400, Julien Grall wrote:
>>>>      > Hi Ian,
>>>>      >
>>>>      > On 16/07/14 12:02, Ian Campbell wrote:
>>>>      > > I'd much prefer to just have the fix to xc_dom_gnttab_hvm_seed
>>>>     for ARM
>>>>      > > and continue to punt on this interface until it is actually
>>>>     needed by
>>>>      > > something unavoidable on the guest side (and simultaneously
>>>>     hope that
>>>>      > > day never comes...).
>>>>      >
>>>>      > This patch is a requirement to make Xen Memory access working on
>>>> ARM.
>>>>      > Could you reconsider the possibility to apply this patch on Xen?
>>>>
>>>>     Needs more rationale as to why it is required for Xen Memory (do you
>>>>     mean xenaccess?). I assume I'll find that in the relevant thread
>>>> once I
>>>>     get to it?
>>>>
>>>>
>>>> It's used in a non-critical sanity check for performance reasons, as
>>>> seen here:
>>>>
>>>> https://github.com/tklengyel/xen/blob/arm_memaccess3/xen/common/mem_access.c#L75
>>>> .
>>>> Without the sanity check we might attempt to set mem_access permissions
>>>> on gpfn's that don't exist for the guest. It wouldn't break anything to
>>>> do that but if we know beforehand that the gpfn is outside the scope of
>>>> what the guest has we can skip the entire thing.
>>>>
>>>
>>> It might be better if you carry this patch on your series.
>>>
>>> Regards,
>>>
>>> --
>>> Julien Grall
>>>
>>
>>   Alright.
>>
>>  Thanks,
>>  Tamas
>>
>
>  As a sidenote, if this patch is problematic to merge for some reason,
> the current implementation still needs to change to return 0 instead of
> -ENOSYS as to conform to the x86 implementation. On the x86 side 0 is used
> to indicate failure. See 7ffc9779aa5120c5098d938cb88f69a1dda9a0fe "x86:
> make certain memory sub-ops return valid values" for more info.
>
>
> 0 does not indicate failure.  It indicates the value held in the shared
> info field, which if 0, indicates that the domain is still being built by
> the toolstack.
>
> -ENOSYS is still a valid failure case.
>
> ~Andrew
>

Hm, in that case ignore my comment.

Thanks,
Tamas
Tamas K Lengyel Sept. 10, 2014, 11:21 a.m. UTC | #29
On Mon, Sep 8, 2014 at 10:47 PM, Tamas K Lengyel <tamas.lengyel@zentific.com> wrote:

>
> On Mon, Sep 8, 2014 at 10:43 PM, Julien Grall <julien.grall@linaro.org>
> wrote:
>
>> Hello Tamas,
>>
>> On 03/09/14 02:00, Tamas K Lengyel wrote:
>>
>>>
>>>
>>>
>>> On Wed, Sep 3, 2014 at 10:44 AM, Ian Campbell <Ian.Campbell@citrix.com
>>> <mailto:Ian.Campbell@citrix.com>> wrote:
>>>
>>>     On Mon, 2014-09-01 at 17:32 -0400, Julien Grall wrote:
>>>      > Hi Ian,
>>>      >
>>>      > On 16/07/14 12:02, Ian Campbell wrote:
>>>      > > I'd much prefer to just have the fix to xc_dom_gnttab_hvm_seed
>>>     for ARM
>>>      > > and continue to punt on this interface until it is actually
>>>     needed by
>>>      > > something unavoidable on the guest side (and simultaneously
>>>     hope that
>>>      > > day never comes...).
>>>      >
>>>      > This patch is a requirement to make Xen Memory access working on
>>> ARM.
>>>      > Could you reconsider the possibility to apply this patch on Xen?
>>>
>>>     Needs more rationale as to why it is required for Xen Memory (do you
>>>     mean xenaccess?). I assume I'll find that in the relevant thread
>>> once I
>>>     get to it?
>>>
>>>
>>> It's used in a non-critical sanity check for performance reasons, as
>>> seen here:
>>> https://github.com/tklengyel/xen/blob/arm_memaccess3/xen/
>>> common/mem_access.c#L75.
>>> Without the sanity check we might attempt to set mem_access permissions
>>> on gpfn's that don't exist for the guest. It wouldn't break anything to
>>> do that but if we know beforehand that the gpfn is outside the scope of
>>> what the guest has we can skip the entire thing.
>>>
>>
>> It might be better if you carry this patch on your series.
>>
>> Regards,
>>
>> --
>> Julien Grall
>>
>
As I have been looking at this more and more, I'm coming to the conclusion
that actually replacing the check in mem_access would be more sensible. The
reason is that the user calling this code normally does it based on
information gathered via getdomaininfo, and the max_pages reported back to
the user is higher than what this implementation of domain_get_maximum_gpfn
reports (on ARM at least). I switched xen-access to use tot_pages instead of
max_pages, but tot_pages has been less than what domain_get_maximum_gpfn
reports. I'm not entirely sure why there is such a discrepancy, but
enforcing a value that the user will ultimately 'guess' does not make much
sense.

Tamas

Patch

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 03a0533..2d40f07 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -947,7 +947,7 @@  int page_is_ram_type(unsigned long mfn, unsigned long mem_type)
 
 unsigned long domain_get_maximum_gpfn(struct domain *d)
 {
-    return -ENOSYS;
+    return d->arch.p2m.max_mapped_gfn;
 }
 
 void share_xen_page_with_guest(struct page_info *page,