From patchwork Thu Nov 7 17:41:23 2013
X-Patchwork-Submitter: Will Newton <will.newton@linaro.org>
X-Patchwork-Id: 21400
Message-ID: <527BD0C3.4010607@linaro.org>
Date: Thu, 07 Nov 2013 17:41:23 +0000
From: Will Newton <will.newton@linaro.org>
To: libc-alpha@sourceware.org
CC: Patch Tracking <patches@linaro.org>
Subject: [PATCH 1/2] malloc/malloc.c: Validate SIZE passed to aligned_alloc.

The ISO C11 standard specifies that the SIZE passed to aligned_alloc
must be an integral multiple of ALIGNMENT. Aliasing aligned_alloc to
memalign does not enforce this restriction, so create a new function
that performs this validation.

ChangeLog:

2013-11-07  Will Newton  <will.newton@linaro.org>

	* malloc/malloc.c (__aligned_alloc): New function.
	(__libc_memalign): Move main body of the code to
	_int_aligned_alloc and call that function.
	(_int_aligned_alloc): New function.
---
 malloc/malloc.c | 97 +++++++++++++++++++++++++++++++++++----------------------
 1 file changed, 60 insertions(+), 37 deletions(-)

diff --git a/malloc/malloc.c b/malloc/malloc.c
index 897c43a..67ad141 100644
--- a/malloc/malloc.c
+++ b/malloc/malloc.c
@@ -1054,6 +1054,7 @@ static void     _int_free(mstate, mchunkptr, int);
 static void*  _int_realloc(mstate, mchunkptr, INTERNAL_SIZE_T,
 			   INTERNAL_SIZE_T);
 static void*  _int_memalign(mstate, size_t, size_t);
+static void*  _int_aligned_alloc(size_t, size_t);
 static void*  _int_valloc(mstate, size_t);
 static void*  _int_pvalloc(mstate, size_t);
 static void malloc_printerr(int action, const char *str, void *ptr);
@@ -3000,56 +3001,34 @@ __libc_realloc(void* oldmem, size_t bytes)
 libc_hidden_def (__libc_realloc)
 
 void*
-__libc_memalign(size_t alignment, size_t bytes)
+__aligned_alloc(size_t alignment, size_t bytes)
 {
-  mstate ar_ptr;
-  void *p;
-
   void *(*hook) (size_t, size_t, const void *) =
     force_reg (__memalign_hook);
   if (__builtin_expect (hook != NULL, 0))
     return (*hook)(alignment, bytes, RETURN_ADDRESS (0));
 
-  /* If need less alignment than we give anyway, just relay to malloc */
-  if (alignment <= MALLOC_ALIGNMENT) return __libc_malloc(bytes);
-
-  /* Otherwise, ensure that it is at least a minimum chunk size */
-  if (alignment <  MINSIZE) alignment = MINSIZE;
-
-  /* If the alignment is greater than SIZE_MAX / 2 + 1 it cannot be a
-     power of 2 and will cause overflow in the check below.  */
-  if (alignment > SIZE_MAX / 2 + 1)
+  /* Check size is integral multiple of alignment.  */
+  if (bytes % alignment != 0)
     {
       __set_errno (EINVAL);
       return 0;
     }
 
-  /* Check for overflow.  */
-  if (bytes > SIZE_MAX - alignment - MINSIZE)
-    {
-      __set_errno (ENOMEM);
-      return 0;
-    }
+  return _int_aligned_alloc(alignment, bytes);
+}
+weak_alias (__aligned_alloc, aligned_alloc)
 
-  arena_get(ar_ptr, bytes + alignment + MINSIZE);
-  if(!ar_ptr)
-    return 0;
-  p = _int_memalign(ar_ptr, alignment, bytes);
-  if(!p) {
-    LIBC_PROBE (memory_memalign_retry, 2, bytes, alignment);
-    ar_ptr = arena_get_retry (ar_ptr, bytes);
-    if (__builtin_expect(ar_ptr != NULL, 1)) {
-      p = _int_memalign(ar_ptr, alignment, bytes);
-      (void)mutex_unlock(&ar_ptr->mutex);
-    }
-  } else
-    (void)mutex_unlock(&ar_ptr->mutex);
-  assert(!p || chunk_is_mmapped(mem2chunk(p)) ||
-	 ar_ptr == arena_for_chunk(mem2chunk(p)));
-  return p;
+void*
+__libc_memalign(size_t alignment, size_t bytes)
+{
+  void *(*hook) (size_t, size_t, const void *) =
+    force_reg (__memalign_hook);
+  if (__builtin_expect (hook != NULL, 0))
+    return (*hook)(alignment, bytes, RETURN_ADDRESS (0));
+
+  return _int_aligned_alloc(alignment, bytes);
 }
-/* For ISO C11.  */
-weak_alias (__libc_memalign, aligned_alloc)
 libc_hidden_def (__libc_memalign)
 
 void*
@@ -4404,6 +4383,50 @@ _int_memalign(mstate av, size_t alignment, size_t bytes)
   return chunk2mem(p);
 }
 
+static void *
+_int_aligned_alloc(size_t alignment, size_t bytes)
+{
+  mstate ar_ptr;
+  void *p;
+
+  /* If need less alignment than we give anyway, just relay to malloc */
+  if (alignment <= MALLOC_ALIGNMENT) return __libc_malloc(bytes);
+
+  /* Otherwise, ensure that it is at least a minimum chunk size */
+  if (alignment <  MINSIZE) alignment = MINSIZE;
+
+  /* If the alignment is greater than SIZE_MAX / 2 + 1 it cannot be a
+     power of 2 and will cause overflow in the check below.  */
+  if (alignment > SIZE_MAX / 2 + 1)
+    {
+      __set_errno (EINVAL);
+      return 0;
+    }
+
+  /* Check for overflow.  */
+  if (bytes > SIZE_MAX - alignment - MINSIZE)
+    {
+      __set_errno (ENOMEM);
+      return 0;
+    }
+
+  arena_get(ar_ptr, bytes + alignment + MINSIZE);
+  if(!ar_ptr)
+    return 0;
+  p = _int_memalign(ar_ptr, alignment, bytes);
+  if(!p) {
+    LIBC_PROBE (memory_memalign_retry, 2, bytes, alignment);
+    ar_ptr = arena_get_retry (ar_ptr, bytes);
+    if (__builtin_expect(ar_ptr != NULL, 1)) {
+      p = _int_memalign(ar_ptr, alignment, bytes);
+      (void)mutex_unlock(&ar_ptr->mutex);
+    }
+  } else
+    (void)mutex_unlock(&ar_ptr->mutex);
+  assert(!p || chunk_is_mmapped(mem2chunk(p)) ||
+	 ar_ptr == arena_for_chunk(mem2chunk(p)));
+  return p;
+}
 
 /* ------------------------------ valloc ------------------------------
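
For reference, here is a minimal example (not part of the patch) of the
behaviour the new check is intended to give. It assumes a libc built with
this patch applied and compilation with -std=c11; the printed pointer
values are illustrative only:

/* aligned_alloc must now reject a SIZE that is not an integral
   multiple of ALIGNMENT, failing with EINVAL.  */
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

int
main (void)
{
  /* Valid: 64 is an integral multiple of the 16-byte alignment.  */
  void *ok = aligned_alloc (16, 64);

  /* Invalid under ISO C11: 65 is not a multiple of 16.  With this
     patch the call returns NULL and sets errno to EINVAL instead of
     silently succeeding through the memalign alias.  */
  errno = 0;
  void *bad = aligned_alloc (16, 65);

  printf ("aligned_alloc (16, 64): %p\n", ok);
  printf ("aligned_alloc (16, 65): %p (errno %s EINVAL)\n",
          bad, errno == EINVAL ? "==" : "!=");

  free (ok);   /* Release the valid allocation.  */
  free (bad);  /* free (NULL) is a harmless no-op.  */
  return 0;
}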