[Xen-devel] xen/malloc: correctly handle page allocation when align > size

Message ID 1394174763-6992-1-git-send-email-julien.grall@linaro.org
State Superseded, archived

Commit Message

Julien Grall March 7, 2014, 6:46 a.m. UTC
When align is greater than size, we need to derive the order from
align when allocating whole pages. I believe this was the goal of
commit fb034f42 "xmalloc: make close-to-PAGE_SIZE allocations more
efficient".

Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
 xen/common/xmalloc_tlsf.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Comments

Jan Beulich March 7, 2014, 10:06 a.m. UTC | #1
>>> On 07.03.14 at 07:46, Julien Grall <julien.grall@linaro.org> wrote:
> When align is greater than size, we need to derive the order from
> align when allocating whole pages. I believe this was the goal of
> commit fb034f42 "xmalloc: make close-to-PAGE_SIZE allocations more
> efficient".

Oh, yes, of course it was. Without that the call is pointless.

> Signed-off-by: Julien Grall <julien.grall@linaro.org>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

Albeit in fact the better change might be to simply use
max(size, align) in the initializer of order - no idea why I didn't do
it that way.
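
A minimal sketch of what that alternative might look like, assuming the
declarations visible in the quoted hunk (illustrative only, not the
exact in-tree code):

    static void *xmalloc_whole_pages(unsigned long size, unsigned long align)
    {
        /* Take the order from whichever of size/align is larger up
         * front, instead of conditionally recomputing it afterwards. */
        unsigned int i, order = get_order_from_bytes(max(size, align));
        void *res, *p;

        res = alloc_xenheap_pages(order, 0);
        if ( res == NULL )
            return NULL;
        /* ... remainder unchanged ... */
    }

Since alloc_xenheap_pages() hands back blocks of 2^order pages that are
naturally aligned to their own size, taking the larger of size and align
in one place satisfies both constraints without the extra conditional.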

Out of curiosity: I never expected this code path to be actually
taken - what is it that you need this to work correctly for?

Jan

> --- a/xen/common/xmalloc_tlsf.c
> +++ b/xen/common/xmalloc_tlsf.c
> @@ -531,7 +531,7 @@ static void *xmalloc_whole_pages(unsigned long size, unsigned long align)
>      void *res, *p;
>  
>      if ( align > size )
> -        get_order_from_bytes(align);
> +        order = get_order_from_bytes(align);
>  
>      res = alloc_xenheap_pages(order, 0);
>      if ( res == NULL )
Julien Grall March 9, 2014, 3:25 p.m. UTC | #2
On 07/03/14 10:06, Jan Beulich wrote:
>>>> On 07.03.14 at 07:46, Julien Grall <julien.grall@linaro.org> wrote:
>> When align is greater than size, we need to derive the order from
>> align when allocating whole pages. I believe this was the goal of
>> commit fb034f42 "xmalloc: make close-to-PAGE_SIZE allocations more
>> efficient".
>
> Oh, yes, of course it was. Without that the call is pointless.
>
>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>
> Albeit in fact the better change might be to simply use
> max(size, align) in the initializer of order - no idea why I didn't do
> it that way.

I can send a new version of this patch to use max.

> Out of curiosity: I never expected this code path to be actually
> taken - what is it that you need this to work correctly for?

I was looking at the code to decide whether the best solution is to
use xmalloc or to call alloc_xenheap_pages directly.
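
For context, a rough sketch of the two approaches being weighed, with
made-up sizes and assuming the usual Xen interfaces (_xmalloc() taking
an explicit alignment, alloc_xenheap_pages() taking a page order):

    /* Hypothetical fragments, for illustration only. */

    /* (a) Go through the xmalloc layer: requests too large for the
     * pool allocator end up in xmalloc_whole_pages(), the function
     * fixed by this patch (here with align > size, the very case it
     * addresses); the result is freed with xfree(). */
    void *buf = _xmalloc(2 * PAGE_SIZE, 4 * PAGE_SIZE);
    xfree(buf);

    /* (b) Allocate whole pages directly: the caller works in page
     * orders, the block is aligned to its own size by construction,
     * and it is freed with free_xenheap_pages(). */
    unsigned int order = get_order_from_bytes(2 * PAGE_SIZE);
    void *pg = alloc_xenheap_pages(order, 0);
    free_xenheap_pages(pg, order);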

Patch

diff --git a/xen/common/xmalloc_tlsf.c b/xen/common/xmalloc_tlsf.c
index d3bdfa7..c957119 100644
--- a/xen/common/xmalloc_tlsf.c
+++ b/xen/common/xmalloc_tlsf.c
@@ -531,7 +531,7 @@ static void *xmalloc_whole_pages(unsigned long size, unsigned long align)
     void *res, *p;
 
     if ( align > size )
-        get_order_from_bytes(align);
+        order = get_order_from_bytes(align);
 
     res = alloc_xenheap_pages(order, 0);
     if ( res == NULL )
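
As an illustration of the case the fix addresses (a hypothetical call,
not taken from the thread), with 4KiB pages a request such as

    /* Large enough to bypass the pool allocator, but with an
     * alignment bigger than the size.  Without the fix, order is
     * computed from size alone (2 pages here), so the block returned
     * by alloc_xenheap_pages() is only guaranteed to be 8KiB-aligned
     * rather than the requested 16KiB. */
    void *p = _xmalloc(PAGE_SIZE + 64, 4 * PAGE_SIZE);

relies on the order being taken from align, since alloc_xenheap_pages()
only guarantees alignment to the size of the block it returns.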