
malloc: Implement heap protector

Message ID dd818b8d-7176-8490-0780-e6e25756323a@redhat.com
State Superseded
Headers show

Commit Message

Florian Weimer Oct. 28, 2016, 1:17 p.m. UTC
This is a fully working version of a heap protector, which uses XOR 
cookies to obfuscate the heap metadata.  The hope is that this makes 
exploitation of simple heap overflows more difficult because attackers 
must first obtain the heap guard values before they can forge a malloc 
chunk that the malloc implementation will accept.

I verified that existing Emacs binaries which contain a dumped heap will 
still work after this change.

I still need to redo the performance analysis.  An older version of the 
code had these results for one of DJ's workload files:

         Welch Two Sample t-test

data:  old_malloc and new_malloc
t = -5.0042, df = 157.65, p-value = 1.484e-06
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
  -4.267772 -1.852228
sample estimates:
mean of x mean of y
    131.07    134.13


         Welch Two Sample t-test

data:  old_calloc and new_calloc
t = -0.90822, df = 197.05, p-value = 0.3649
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
  -8.435823  3.115823
sample estimates:
mean of x mean of y
    206.83    209.49


         Welch Two Sample t-test

data:  old_realloc and new_realloc
t = -4.7164, df = 122.86, p-value = 6.406e-06
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
  -4.202311 -1.717689
sample estimates:
mean of x mean of y
    139.70    142.66


         Welch Two Sample t-test

data:  old_free and new_free
t = -3.0362, df = 105.61, p-value = 0.003018
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
  -1.4546563 -0.3053437
sample estimates:
mean of x mean of y
     96.47     97.35


So 3 cycles for malloc, realloc and probably calloc, and one cycle for 
free.  I still hope to recover some of the performance loss with 
micro-optimizations, but I'd like to get the patch committed ASAP to 
increase testing time before the release.

Florian

Comments

Carlos O'Donell Oct. 28, 2016, 3:05 p.m. UTC | #1
Florian,

Great work, and I'd say this is ready to go in, but we're missing
source-level documentation of the feature.

(1) Detailed source-level comment about the feature.

I'd like to see you write a lengthy and detailed comment at the
start of malloc-guard.c which explains what is going on and why
we're doing this. Any future developer should open malloc-guard.c
and be able to quickly get an overview of the new feature.

(2) NEWS entry.

This is a new security feature that should get documented in NEWS.
Please write up a paragraph about it for NEWS.

OK for me, but I'd like at least one other developer to review
this patch before we consider it OK to check in.

> I still need to redo the performance analysis.  An older version of
> the code had these results for one of DJ's workload files:

I'm not going to ask that anything go into the microbenchmark, because
only the simulator we have produces the cycle counts you need to
measure mean performance differences in cycles.  The dj/malloc branch
contains the simulator, which can be used to validate your changes
against any workload a developer captures.

I'm happy to see us using the simulator and workloads to get reproducible
results about performance questions for more complex APIs.

Which workload was this? The LibreOffice large spreadsheet calc test?

[snip statistics]

Also very happy to see the use of statistics to show the significance
of the changes.

> So 3 cycles for malloc, realloc and probably calloc, and one cycle
> for free.  I still hope to recover some of the performance loss with
> micro-optimizations, but I'd like to get the patch committed ASAP to
> increase testing time before the release.


Even 3 cycles is in the noise given today's malloc.

Only with DJ's thread-local cache will it start to make a difference,
when the fast path is dropping down to 10 cycles without a lock.

Therefore I think 3 cycles is acceptable today. It won't be once
DJ's patches get into master. And then we'll be looking at every
micro-optimization we can take on the fast path to provide the best
performance.
 
> 2016-10-27  Florian Weimer  <fweimer@redhat.com>
> 
> 	* malloc/malloc-internal.h (__malloc_header_guard)
> 	(__malloc_footer_guard): Declare.
> 	* malloc/malloc-guard.c: New file.
> 	* malloc/Makefile (routines): Add it.
> 	* malloc/malloc.c (HEAP_CRYPT_SIZE, HEAP_CRYPT_PREVSIZE): Define.
> 	(chunksize_nomask, prev_size, set_prev_size, set_head_size)
> 	(set_head, set_foot): Add encryption.
> 	* malloc/arena.c (ptmalloc_init): For shared builds, initialize
> 	the heap guard variables.  Initialize the top chunk.
> 	* malloc/hooks.c (malloc_set_state): Apply the heap guard to the
> 	dumped heap.
> 	* malloc/tst-mallocstate.c (malloc_usable_size_valid): New
> 	variable.
> 	(check_allocation): Check malloc_usable_size result if
> 	malloc_usable_size_valid.
> 	(init_heap): Set malloc_usable_size_valid.
> 	* csu/libc-start.c (LIBC_START_MAIN): Initialize heap guard
> 	variables.
> 	* sysdeps/generic/ldsodefs.h (struct rtld_global_ro): Add members
> 	_dl_malloc_header_guard, _dl_malloc_footer_guard.
> 	* elf/rtld.c (security_init): Initialize temporary copy of the
> 	heap guard variables.
> 

> diff --git a/csu/libc-start.c b/csu/libc-start.c
> index 333a4cc..ae6abde 100644
> --- a/csu/libc-start.c
> +++ b/csu/libc-start.c
> @@ -22,6 +22,7 @@
>  #include <ldsodefs.h>
>  #include <exit-thread.h>
>  #include <elf/dl-keysetup.h>
> +#include <malloc/malloc-internal.h>

OK

>  
>  extern void __libc_init_first (int argc, char **argv, char **envp);
>  
> @@ -210,6 +211,11 @@ LIBC_START_MAIN (int (*main) (int, char **, char ** MAIN_AUXVEC_DECL),
>    __pointer_chk_guard_local = keys.pointer;
>  # endif
>  
> +  /* In the non-shared case, we initialize the heap guard
> +     directly.  */
> +  __malloc_header_guard = keys.heap_header;
> +  __malloc_footer_guard = keys.heap_footer;

OK.

> +
>  #endif
>  
>    /* Register the destructor of the dynamic linker if there is any.  */
> diff --git a/elf/rtld.c b/elf/rtld.c
> index de965da..7ea06e4 100644
> --- a/elf/rtld.c
> +++ b/elf/rtld.c
> @@ -42,6 +42,7 @@
>  #include <stap-probe.h>
>  #include <stackinfo.h>
>  #include <dl-keysetup.h>
> +#include <malloc/malloc-internal.h>

OK.

>  
>  #include <assert.h>
>  
> @@ -716,6 +717,11 @@ security_init (void)
>  #endif
>    __pointer_chk_guard_local = keys.pointer;
>  
> +  /* Keep a copy of the computed keys, so that they can be obtained
> +     during malloc initialization in libc.so.  */
> +  GLRO (dl_malloc_header_guard) = keys.heap_header;
> +  GLRO (dl_malloc_footer_guard) = keys.heap_footer;

OK.

> +
>    /* We do not need the _dl_random value anymore.  The less
>       information we leave behind, the better, so clear the
>       variable.  */

> diff --git a/malloc/Makefile b/malloc/Makefile
> index b8efcd6..cd289f8 100644
> --- a/malloc/Makefile
> +++ b/malloc/Makefile
> @@ -41,7 +41,7 @@ tests-static := \
>  tests += $(tests-static)
>  test-srcs = tst-mtrace
>  
> -routines = malloc morecore mcheck mtrace obstack \
> +routines = malloc morecore mcheck mtrace obstack malloc-guard \

OK.

>    scratch_buffer_grow scratch_buffer_grow_preserve \
>    scratch_buffer_set_array_size
>  
> diff --git a/malloc/arena.c b/malloc/arena.c
> index f85b0af..792451a 100644
> --- a/malloc/arena.c
> +++ b/malloc/arena.c
> @@ -340,6 +340,19 @@ ptmalloc_init (void)
>        if (check_action != 0)
>          __malloc_check_init ();
>      }
> +
> +#ifdef SHARED
> +  /* For a shared library, elf/rtld.c performed key setup in
> +     security_init, and we copy the keys.  In static builds, the guard
> +     cookies have already been initialized in csu/libc-start.c.  */
> +  __malloc_header_guard = GLRO (dl_malloc_header_guard);
> +  __malloc_footer_guard = GLRO (dl_malloc_footer_guard);

OK.

> +#endif
> +
> +  /* Initialize the top chunk, based on the heap protector guards.  */
> +  malloc_init_state (&main_arena);
> +  set_head (main_arena.top, 0);

OK.

> +
>  #if HAVE_MALLOC_INIT_HOOK
>    void (*hook) (void) = atomic_forced_read (__malloc_initialize_hook);
>    if (hook != NULL)

> diff --git a/malloc/hooks.c b/malloc/hooks.c
> index 12995d3..8bb2b74 100644
> --- a/malloc/hooks.c
> +++ b/malloc/hooks.c
> @@ -537,35 +537,38 @@ malloc_set_state (void *msptr)
>       dumped_main_arena_end, realloc and free will recognize these
>       chunks as dumped fake mmapped chunks and never free them.  */
>  
> -  /* Find the chunk with the lowest address with the heap.  */
> -  mchunkptr chunk = NULL;
> +  /* Find the chunk with the lowest address with the heap.  If
> +     successful, size_header will point to the mchunk_size member (not
> +     the chunk start, i.e. the mchunck_prev_size member).  */
> +  size_t *size_header = NULL;
>    {
>      size_t *candidate = (size_t *) ms->sbrk_base;
>      size_t *end = (size_t *) (ms->sbrk_base + ms->sbrked_mem_bytes);
>      while (candidate < end)
>        if (*candidate != 0)
>  	{
> -	  chunk = mem2chunk ((void *) (candidate + 1));
> +	  size_header = candidate;

OK.

>  	  break;
>  	}
>        else
>  	++candidate;
>    }
> -  if (chunk == NULL)
> +  if (size_header == NULL)

OK.

>      return 0;
>  
>    /* Iterate over the dumped heap and patch the chunks so that they
> -     are treated as fake mmapped chunks.  */
> +     are treated as fake mmapped chunks.  We cannot use the regular
> +     accessors because the chunks we read are not yet encrypted.  */
>    mchunkptr top = ms->av[2];
> -  while (chunk < top)
> +  size_t *top_size_header = ((size_t *) top) + 1;
> +  while (size_header < top_size_header)
>      {
> -      if (inuse (chunk))
> -	{
> -	  /* Mark chunk as mmapped, to trigger the fallback path.  */
> -	  size_t size = chunksize (chunk);
> -	  set_head (chunk, size | IS_MMAPPED);
> -	}
> -      chunk = next_chunk (chunk);
> +      size_t size = *size_header & ~SIZE_BITS;
> +      /* We treat all chunks as allocated.  The heap consistency
> +	 checks do not trigger because they are not active for the
> +	 dumped heap.  */
> +      *size_header = HEAP_CRYPT_SIZE (size) | IS_MMAPPED;
> +      size_header += size / sizeof (*size_header);

OK.

>      }
>  
>    /* The dumped fake mmapped chunks all lie in this address range.  */

> diff --git a/malloc/malloc-guard.c b/malloc/malloc-guard.c
> new file mode 100644
> index 0000000..c8b3581
> --- /dev/null
> +++ b/malloc/malloc-guard.c
> @@ -0,0 +1,29 @@
> +/* Heap protector variables.
> +   Copyright (C) 2016 Free Software Foundation, Inc.
> +   This file is part of the GNU C Library.
> +
> +   The GNU C Library is free software; you can redistribute it and/or
> +   modify it under the terms of the GNU Lesser General Public License as
> +   published by the Free Software Foundation; either version 2.1 of the
> +   License, or (at your option) any later version.
> +
> +   The GNU C Library is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> +   Lesser General Public License for more details.
> +
> +   You should have received a copy of the GNU Lesser General Public
> +   License along with the GNU C Library; see the file COPYING.LIB.  If
> +   not, see <http://www.gnu.org/licenses/>.  */
> +
> +/* These variables are defined in a separate file because the static
> +   startup code initializes them, but this should not pull the rest of
> +   the libc malloc implementation into the link.  */
> +
> +#include <malloc-internal.h>
> +
> +/* The heap cookie.  The lowest three bits (corresponding to
> +   SIZE_BITS) in __malloc_guard_header must be clear.  Initialized
> +   during libc startup, and computed by elf/dl-keysetup.c.  */
> +INTERNAL_SIZE_T __malloc_header_guard; /* For size.  */
> +INTERNAL_SIZE_T __malloc_footer_guard; /* For prev_size.  */

OK.

> diff --git a/malloc/malloc-internal.h b/malloc/malloc-internal.h
> index a3df8c3..e723539 100644
> --- a/malloc/malloc-internal.h
> +++ b/malloc/malloc-internal.h
> @@ -81,5 +81,8 @@ void __malloc_fork_unlock_parent (void) internal_function attribute_hidden;
>  /* Called in the child process after a fork.  */
>  void __malloc_fork_unlock_child (void) internal_function attribute_hidden;
>  
> +/* Random values for the heap protector.  */
> +extern INTERNAL_SIZE_T __malloc_header_guard attribute_hidden;
> +extern INTERNAL_SIZE_T __malloc_footer_guard attribute_hidden;

OK.

>  
>  #endif /* _MALLOC_INTERNAL_H */

> diff --git a/malloc/malloc.c b/malloc/malloc.c
> index 72d22bd..e1e732d 100644
> --- a/malloc/malloc.c
> +++ b/malloc/malloc.c
> @@ -1221,6 +1221,10 @@ nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  /* Mark a chunk as not being on the main arena.  */
>  #define set_non_main_arena(p) ((p)->mchunk_size |= NON_MAIN_ARENA)
>  
> +/* Decrypt a heap header chunk.  */
> +#define HEAP_CRYPT_SIZE(val) (__malloc_header_guard ^ ((INTERNAL_SIZE_T) val))
> +#define HEAP_CRYPT_PREVSIZE(val) \
> +  (__malloc_footer_guard ^ ((INTERNAL_SIZE_T) val))

OK.

>  
>  /*
>     Bits to mask off when extracting size
> @@ -1236,16 +1240,16 @@ nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  #define chunksize(p) (chunksize_nomask (p) & ~(SIZE_BITS))
>  
>  /* Like chunksize, but do not mask SIZE_BITS.  */
> -#define chunksize_nomask(p)         ((p)->mchunk_size)
> +#define chunksize_nomask(p) HEAP_CRYPT_SIZE ((p)->mchunk_size)

OK.

>  
>  /* Ptr to next physical malloc_chunk. */
>  #define next_chunk(p) ((mchunkptr) (((char *) (p)) + chunksize (p)))
>  
>  /* Size of the chunk below P.  Only valid if prev_inuse (P).  */
> -#define prev_size(p) ((p)->mchunk_prev_size)
> +#define prev_size(p) HEAP_CRYPT_PREVSIZE ((p)->mchunk_prev_size)

OK.

>  
>  /* Set the size of the chunk below P.  Only valid if prev_inuse (P).  */
> -#define set_prev_size(p, sz) ((p)->mchunk_prev_size = (sz))
> +#define set_prev_size(p, sz) ((p)->mchunk_prev_size = HEAP_CRYPT_PREVSIZE (sz))

OK.

>  
>  /* Ptr to previous physical malloc_chunk.  Only valid if prev_inuse (P).  */
>  #define prev_chunk(p) ((mchunkptr) (((char *) (p)) - prev_size (p)))
> @@ -1277,13 +1281,16 @@ nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  
>  
>  /* Set size at head, without disturbing its use bit */
> -#define set_head_size(p, s)  ((p)->mchunk_size = (((p)->mchunk_size & SIZE_BITS) | (s)))
> +#define set_head_size(p, s) \
> +  ((p)->mchunk_size = ((p)->mchunk_size & SIZE_BITS) | HEAP_CRYPT_SIZE (s))

OK.

>  
>  /* Set size/use field */
> -#define set_head(p, s)       ((p)->mchunk_size = (s))
> +#define set_head(p, s) ((p)->mchunk_size = HEAP_CRYPT_SIZE (s))

OK.

>  
>  /* Set size at footer (only when chunk is not in use) */
> -#define set_foot(p, s)       (((mchunkptr) ((char *) (p) + (s)))->mchunk_prev_size = (s))
> +#define set_foot(p, s) \
> +  (((mchunkptr) ((char *) (p) + (s)))->mchunk_prev_size \
> +    = HEAP_CRYPT_PREVSIZE (s))

OK.

>  
>  
>  #pragma GCC poison mchunk_size

> diff --git a/malloc/tst-mallocstate.c b/malloc/tst-mallocstate.c
> index 7e081c5..bd6678e 100644
> --- a/malloc/tst-mallocstate.c
> +++ b/malloc/tst-mallocstate.c
> @@ -186,6 +186,10 @@ struct allocation
>    unsigned int seed;
>  };
>  
> +/* After heap initialization, we can call malloc_usable_size to check
> +   if it gives valid results.  */
> +static bool malloc_usable_size_valid;

OK.

> +
>  /* Check that the allocation task allocation has the expected
>     contents.  */
>  static void
> @@ -221,6 +225,23 @@ check_allocation (const struct allocation *alloc, int index)
>        putc ('\n', stdout);
>        errors = true;
>      }
> +
> +  if (malloc_usable_size_valid)
> +    {
> +      size_t usable = malloc_usable_size (alloc->data);
> +      if (usable < size)
> +        {
> +          printf ("error: allocation %d has reported size %zu (expected %zu)\n",
> +                  index, usable, size);
> +          errors = true;
> +        }
> +      else if (usable > size + 4096)
> +        {
> +          printf ("error: allocation %d reported as %zu bytes (requested %zu)\n",
> +                  index, usable, size);
> +          errors = true;

OK.

> +        }
> +    }
>  }
>  
>  /* A heap allocation combined with pending actions on it.  */
> @@ -317,6 +338,10 @@ init_heap (void)
>        write_message ("error: malloc_set_state failed\n");
>        _exit (1);
>      }
> +
> +  /* The heap has been initialized.  We may now call
> +     malloc_usable_size.  */
> +  malloc_usable_size_valid = true;

OK.

>  }
>  
>  /* Interpose the initialization callback.  */
> diff --git a/sysdeps/generic/ldsodefs.h b/sysdeps/generic/ldsodefs.h
> index f68fdf4..801ded8 100644
> --- a/sysdeps/generic/ldsodefs.h
> +++ b/sysdeps/generic/ldsodefs.h
> @@ -607,6 +607,10 @@ struct rtld_global_ro
>    /* List of auditing interfaces.  */
>    struct audit_ifaces *_dl_audit;
>    unsigned int _dl_naudit;
> +
> +  /* malloc protection keys. */
> +  uintptr_t _dl_malloc_header_guard;
> +  uintptr_t _dl_malloc_footer_guard;

OK.

>  };
>  # define __rtld_global_attribute__
>  # if IS_IN (rtld)


-- 
Cheers,
Carlos.
Paul Eggert Oct. 28, 2016, 5:55 p.m. UTC | #2
Thanks for working on this. The idea looks good. I second Carlos's
comments on the source-level comment and NEWS entry. Some other
(minor) issues:

 > +/* The heap cookie.  The lowest three bits (corresponding to
 > +   SIZE_BITS) in __malloc_guard_header must be clear. Initialized

It's __malloc_header_guard, not __malloc_guard_header.

 > +   during libc startup, and computed by elf/dl-keysetup.c.  */
 > +INTERNAL_SIZE_T __malloc_header_guard; /* For size.  */
 > +INTERNAL_SIZE_T __malloc_footer_guard; /* For prev_size.  */


Why two cookies? Doesn't one suffice, and mightn't that be a tad more 
efficient? The comment should explain.

 > +/* Decrypt a heap header chunk.  */


These macros both encrypt and decrypt, right?  At least, they're used 
that way.  The comment should say so.

Come to think of it, suppose we want to microimprove the implementation 
to use val+guard to encrypt and val-guard to decrypt; this might be a bit 
faster on some platforms, and would be safe in unsigned arithmetic. 
Would we then need two sets of macros, one for encryption and one for 
decryption?  Then perhaps we should have two sets of macros now, even 
though the implementations are the same now, to make invoking code 
clearer and more portable to a +/- approach.

 > +#define HEAP_CRYPT_SIZE(val) (__malloc_header_guard ^ ((INTERNAL_SIZE_T) val))
 > +#define HEAP_CRYPT_PREVSIZE(val) \
 > +  (__malloc_footer_guard ^ ((INTERNAL_SIZE_T) val))


'val' needs to be parenthesized in these two macro definientia.

 > +#define set_foot(p, s) \
 > +  (((mchunkptr) ((char *) (p) + (s)))->mchunk_prev_size \
 > +    = HEAP_CRYPT_PREVSIZE (s))


Last line is indented one space too much.

 > +      else if (usable > size + 4096)


Rephrase as 'usable - size > 4096' so that we don't have to worry about 
SIZE being ridiculously large.  The subtraction cannot possibly overflow 
here.

 > +     are treated as fake mmapped chunks.  We cannot use the regular
 > +     accessors because the chunks we read are not yet encrypted.  */


Avoid the royal "we", and use imperative voice instead: "We cannot use" 
-> "Don't use", "the chunks we read" -> "the chunks read".

 > +      /* We treat all chunks as allocated.
...
 > +  /* In the non-shared case, we initialize the heap guard
 > +     directly.  */
...
 > +  /* For a shared library, elf/rtld.c performed key setup in
 > +     security_init, and we copy the keys.  In static builds, the guard
...
 > +/* After heap initialization, we can call malloc_usable_size to check
 > +   if it gives valid results.  */
...
 > +  /* The heap has been initialized.  We may now call
 > +     malloc_usable_size.  */


Similarly, avoid "we".
DJ Delorie Oct. 28, 2016, 8:45 p.m. UTC | #3
My only comments on this one are:

> +/* Decrypt a heap header chunk.  */
> +#define HEAP_CRYPT_SIZE(val) (__malloc_header_guard ^ ((INTERNAL_SIZE_T) val))
> +#define HEAP_CRYPT_PREVSIZE(val) \
> +  (__malloc_footer_guard ^ ((INTERNAL_SIZE_T) val))


For readability and maintainability, we should probably have both CRYPT
and DECRYPT macros, used correctly throughout.  Yes, I know they'd be the
same now, but they might not be later.

> +/* The heap cookie.  The lowest three bits (corresponding to
> +   SIZE_BITS) in __malloc_guard_header must be clear.  Initialized
> +   during libc startup, and computed by elf/dl-keysetup.c.  */


If the lowest three bits must be clear, we should either clear them or
assert that they're clear.

Patch

malloc: Implement heap protector

2016-10-27  Florian Weimer  <fweimer@redhat.com>

	* malloc/malloc-internal.h (__malloc_header_guard)
	(__malloc_footer_guard): Declare.
	* malloc/malloc-guard.c: New file.
	* malloc/Makefile (routines): Add it.
	* malloc/malloc.c (HEAP_CRYPT_SIZE, HEAP_CRYPT_PREVSIZE): Define.
	(chunksize_nomask, prev_size, set_prev_size, set_head_size)
	(set_head, set_foot): Add encryption.
	* malloc/arena.c (ptmalloc_init): For shared builds, initialize
	the heap guard variables.  Initialize the top chunk.
	* malloc/hooks.c (malloc_set_state): Apply the heap guard to the
	dumped heap.
	* malloc/tst-mallocstate.c (malloc_usable_size_valid): New
	variable.
	(check_allocation): Check malloc_usable_size result if
	malloc_usable_size_valid.
	(init_heap): Set malloc_usable_size_valid.
	* csu/libc-start.c (LIBC_START_MAIN): Initialize heap guard
	variables.
	* sysdeps/generic/ldsodefs.h (struct rtld_global_ro): Add members
	_dl_malloc_header_guard, _dl_malloc_footer_guard.
	* elf/rtld.c (security_init): Initialize temporary copy of the
	heap guard variables.

diff --git a/csu/libc-start.c b/csu/libc-start.c
index 333a4cc..ae6abde 100644
--- a/csu/libc-start.c
+++ b/csu/libc-start.c
@@ -22,6 +22,7 @@ 
 #include <ldsodefs.h>
 #include <exit-thread.h>
 #include <elf/dl-keysetup.h>
+#include <malloc/malloc-internal.h>
 
 extern void __libc_init_first (int argc, char **argv, char **envp);
 
@@ -210,6 +211,11 @@  LIBC_START_MAIN (int (*main) (int, char **, char ** MAIN_AUXVEC_DECL),
   __pointer_chk_guard_local = keys.pointer;
 # endif
 
+  /* In the non-shared case, we initialize the heap guard
+     directly.  */
+  __malloc_header_guard = keys.heap_header;
+  __malloc_footer_guard = keys.heap_footer;
+
 #endif
 
   /* Register the destructor of the dynamic linker if there is any.  */
diff --git a/elf/rtld.c b/elf/rtld.c
index de965da..7ea06e4 100644
--- a/elf/rtld.c
+++ b/elf/rtld.c
@@ -42,6 +42,7 @@ 
 #include <stap-probe.h>
 #include <stackinfo.h>
 #include <dl-keysetup.h>
+#include <malloc/malloc-internal.h>
 
 #include <assert.h>
 
@@ -716,6 +717,11 @@  security_init (void)
 #endif
   __pointer_chk_guard_local = keys.pointer;
 
+  /* Keep a copy of the computed keys, so that they can be obtained
+     during malloc initialization in libc.so.  */
+  GLRO (dl_malloc_header_guard) = keys.heap_header;
+  GLRO (dl_malloc_footer_guard) = keys.heap_footer;
+
   /* We do not need the _dl_random value anymore.  The less
      information we leave behind, the better, so clear the
      variable.  */
diff --git a/malloc/Makefile b/malloc/Makefile
index b8efcd6..cd289f8 100644
--- a/malloc/Makefile
+++ b/malloc/Makefile
@@ -41,7 +41,7 @@  tests-static := \
 tests += $(tests-static)
 test-srcs = tst-mtrace
 
-routines = malloc morecore mcheck mtrace obstack \
+routines = malloc morecore mcheck mtrace obstack malloc-guard \
   scratch_buffer_grow scratch_buffer_grow_preserve \
   scratch_buffer_set_array_size
 
diff --git a/malloc/arena.c b/malloc/arena.c
index f85b0af..792451a 100644
--- a/malloc/arena.c
+++ b/malloc/arena.c
@@ -340,6 +340,19 @@  ptmalloc_init (void)
       if (check_action != 0)
         __malloc_check_init ();
     }
+
+#ifdef SHARED
+  /* For a shared library, elf/rtld.c performed key setup in
+     security_init, and we copy the keys.  In static builds, the guard
+     cookies have already been initialized in csu/libc-start.c.  */
+  __malloc_header_guard = GLRO (dl_malloc_header_guard);
+  __malloc_footer_guard = GLRO (dl_malloc_footer_guard);
+#endif
+
+  /* Initialize the top chunk, based on the heap protector guards.  */
+  malloc_init_state (&main_arena);
+  set_head (main_arena.top, 0);
+
 #if HAVE_MALLOC_INIT_HOOK
   void (*hook) (void) = atomic_forced_read (__malloc_initialize_hook);
   if (hook != NULL)
diff --git a/malloc/hooks.c b/malloc/hooks.c
index 12995d3..8bb2b74 100644
--- a/malloc/hooks.c
+++ b/malloc/hooks.c
@@ -537,35 +537,38 @@  malloc_set_state (void *msptr)
      dumped_main_arena_end, realloc and free will recognize these
      chunks as dumped fake mmapped chunks and never free them.  */
 
-  /* Find the chunk with the lowest address with the heap.  */
-  mchunkptr chunk = NULL;
+  /* Find the chunk with the lowest address with the heap.  If
+     successful, size_header will point to the mchunk_size member (not
+     the chunk start, i.e. the mchunck_prev_size member).  */
+  size_t *size_header = NULL;
   {
     size_t *candidate = (size_t *) ms->sbrk_base;
     size_t *end = (size_t *) (ms->sbrk_base + ms->sbrked_mem_bytes);
     while (candidate < end)
       if (*candidate != 0)
 	{
-	  chunk = mem2chunk ((void *) (candidate + 1));
+	  size_header = candidate;
 	  break;
 	}
       else
 	++candidate;
   }
-  if (chunk == NULL)
+  if (size_header == NULL)
     return 0;
 
   /* Iterate over the dumped heap and patch the chunks so that they
-     are treated as fake mmapped chunks.  */
+     are treated as fake mmapped chunks.  We cannot use the regular
+     accessors because the chunks we read are not yet encrypted.  */
   mchunkptr top = ms->av[2];
-  while (chunk < top)
+  size_t *top_size_header = ((size_t *) top) + 1;
+  while (size_header < top_size_header)
     {
-      if (inuse (chunk))
-	{
-	  /* Mark chunk as mmapped, to trigger the fallback path.  */
-	  size_t size = chunksize (chunk);
-	  set_head (chunk, size | IS_MMAPPED);
-	}
-      chunk = next_chunk (chunk);
+      size_t size = *size_header & ~SIZE_BITS;
+      /* We treat all chunks as allocated.  The heap consistency
+	 checks do not trigger because they are not active for the
+	 dumped heap.  */
+      *size_header = HEAP_CRYPT_SIZE (size) | IS_MMAPPED;
+      size_header += size / sizeof (*size_header);
     }
 
   /* The dumped fake mmapped chunks all lie in this address range.  */
diff --git a/malloc/malloc-guard.c b/malloc/malloc-guard.c
new file mode 100644
index 0000000..c8b3581
--- /dev/null
+++ b/malloc/malloc-guard.c
@@ -0,0 +1,29 @@ 
+/* Heap protector variables.
+   Copyright (C) 2016 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public License as
+   published by the Free Software Foundation; either version 2.1 of the
+   License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; see the file COPYING.LIB.  If
+   not, see <http://www.gnu.org/licenses/>.  */
+
+/* These variables are defined in a separate file because the static
+   startup code initializes them, but this should not pull the rest of
+   the libc malloc implementation into the link.  */
+
+#include <malloc-internal.h>
+
+/* The heap cookie.  The lowest three bits (corresponding to
+   SIZE_BITS) in __malloc_guard_header must be clear.  Initialized
+   during libc startup, and computed by elf/dl-keysetup.c.  */
+INTERNAL_SIZE_T __malloc_header_guard; /* For size.  */
+INTERNAL_SIZE_T __malloc_footer_guard; /* For prev_size.  */
diff --git a/malloc/malloc-internal.h b/malloc/malloc-internal.h
index a3df8c3..e723539 100644
--- a/malloc/malloc-internal.h
+++ b/malloc/malloc-internal.h
@@ -81,5 +81,8 @@  void __malloc_fork_unlock_parent (void) internal_function attribute_hidden;
 /* Called in the child process after a fork.  */
 void __malloc_fork_unlock_child (void) internal_function attribute_hidden;
 
+/* Random values for the heap protector.  */
+extern INTERNAL_SIZE_T __malloc_header_guard attribute_hidden;
+extern INTERNAL_SIZE_T __malloc_footer_guard attribute_hidden;
 
 #endif /* _MALLOC_INTERNAL_H */
diff --git a/malloc/malloc.c b/malloc/malloc.c
index 72d22bd..e1e732d 100644
--- a/malloc/malloc.c
+++ b/malloc/malloc.c
@@ -1221,6 +1221,10 @@  nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 /* Mark a chunk as not being on the main arena.  */
 #define set_non_main_arena(p) ((p)->mchunk_size |= NON_MAIN_ARENA)
 
+/* Decrypt a heap header chunk.  */
+#define HEAP_CRYPT_SIZE(val) (__malloc_header_guard ^ ((INTERNAL_SIZE_T) val))
+#define HEAP_CRYPT_PREVSIZE(val) \
+  (__malloc_footer_guard ^ ((INTERNAL_SIZE_T) val))
 
 /*
    Bits to mask off when extracting size
@@ -1236,16 +1240,16 @@  nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 #define chunksize(p) (chunksize_nomask (p) & ~(SIZE_BITS))
 
 /* Like chunksize, but do not mask SIZE_BITS.  */
-#define chunksize_nomask(p)         ((p)->mchunk_size)
+#define chunksize_nomask(p) HEAP_CRYPT_SIZE ((p)->mchunk_size)
 
 /* Ptr to next physical malloc_chunk. */
 #define next_chunk(p) ((mchunkptr) (((char *) (p)) + chunksize (p)))
 
 /* Size of the chunk below P.  Only valid if prev_inuse (P).  */
-#define prev_size(p) ((p)->mchunk_prev_size)
+#define prev_size(p) HEAP_CRYPT_PREVSIZE ((p)->mchunk_prev_size)
 
 /* Set the size of the chunk below P.  Only valid if prev_inuse (P).  */
-#define set_prev_size(p, sz) ((p)->mchunk_prev_size = (sz))
+#define set_prev_size(p, sz) ((p)->mchunk_prev_size = HEAP_CRYPT_PREVSIZE (sz))
 
 /* Ptr to previous physical malloc_chunk.  Only valid if prev_inuse (P).  */
 #define prev_chunk(p) ((mchunkptr) (((char *) (p)) - prev_size (p)))
@@ -1277,13 +1281,16 @@  nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 
 
 /* Set size at head, without disturbing its use bit */
-#define set_head_size(p, s)  ((p)->mchunk_size = (((p)->mchunk_size & SIZE_BITS) | (s)))
+#define set_head_size(p, s) \
+  ((p)->mchunk_size = ((p)->mchunk_size & SIZE_BITS) | HEAP_CRYPT_SIZE (s))
 
 /* Set size/use field */
-#define set_head(p, s)       ((p)->mchunk_size = (s))
+#define set_head(p, s) ((p)->mchunk_size = HEAP_CRYPT_SIZE (s))
 
 /* Set size at footer (only when chunk is not in use) */
-#define set_foot(p, s)       (((mchunkptr) ((char *) (p) + (s)))->mchunk_prev_size = (s))
+#define set_foot(p, s) \
+  (((mchunkptr) ((char *) (p) + (s)))->mchunk_prev_size \
+    = HEAP_CRYPT_PREVSIZE (s))
 
 
 #pragma GCC poison mchunk_size
diff --git a/malloc/tst-mallocstate.c b/malloc/tst-mallocstate.c
index 7e081c5..bd6678e 100644
--- a/malloc/tst-mallocstate.c
+++ b/malloc/tst-mallocstate.c
@@ -186,6 +186,10 @@  struct allocation
   unsigned int seed;
 };
 
+/* After heap initialization, we can call malloc_usable_size to check
+   if it gives valid results.  */
+static bool malloc_usable_size_valid;
+
 /* Check that the allocation task allocation has the expected
    contents.  */
 static void
@@ -221,6 +225,23 @@  check_allocation (const struct allocation *alloc, int index)
       putc ('\n', stdout);
       errors = true;
     }
+
+  if (malloc_usable_size_valid)
+    {
+      size_t usable = malloc_usable_size (alloc->data);
+      if (usable < size)
+        {
+          printf ("error: allocation %d has reported size %zu (expected %zu)\n",
+                  index, usable, size);
+          errors = true;
+        }
+      else if (usable > size + 4096)
+        {
+          printf ("error: allocation %d reported as %zu bytes (requested %zu)\n",
+                  index, usable, size);
+          errors = true;
+        }
+    }
 }
 
 /* A heap allocation combined with pending actions on it.  */
@@ -317,6 +338,10 @@  init_heap (void)
       write_message ("error: malloc_set_state failed\n");
       _exit (1);
     }
+
+  /* The heap has been initialized.  We may now call
+     malloc_usable_size.  */
+  malloc_usable_size_valid = true;
 }
 
 /* Interpose the initialization callback.  */
diff --git a/sysdeps/generic/ldsodefs.h b/sysdeps/generic/ldsodefs.h
index f68fdf4..801ded8 100644
--- a/sysdeps/generic/ldsodefs.h
+++ b/sysdeps/generic/ldsodefs.h
@@ -607,6 +607,10 @@  struct rtld_global_ro
   /* List of auditing interfaces.  */
   struct audit_ifaces *_dl_audit;
   unsigned int _dl_naudit;
+
+  /* malloc protection keys. */
+  uintptr_t _dl_malloc_header_guard;
+  uintptr_t _dl_malloc_footer_guard;
 };
 # define __rtld_global_attribute__
 # if IS_IN (rtld)