From patchwork Fri Oct 3 15:11:25 2014
X-Patchwork-Submitter: Will Newton
X-Patchwork-Id: 38332
From: Will Newton <will.newton@linaro.org>
To: libc-alpha@sourceware.org
Subject: [PATCH 2/3] sysdeps/arm/bits/atomic.h: Add a wider range of atomic operations
Date: Fri, 3 Oct 2014 16:11:25 +0100
Message-Id: <1412349086-11473-3-git-send-email-will.newton@linaro.org>
In-Reply-To: <1412349086-11473-1-git-send-email-will.newton@linaro.org>
References: <1412349086-11473-1-git-send-email-will.newton@linaro.org>

For the case where atomic operations are fully supported by the compiler,
expose more of these operations directly to glibc.  For example, instead of
implementing atomic_or using the compare-and-exchange compiler builtin,
implement it by using the atomic OR compiler builtin directly.

This results in an approximately 1kB code size reduction in libc.so and a
small improvement on the malloc benchtest:

Before: 266.279
After: 259.073
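
To make the change concrete, here is a minimal standalone sketch (an
illustration only, not code from this patch; the example_* names are
hypothetical) of the two approaches for an atomic OR, using the same GCC
builtins the patch relies on:

/* Before (in effect): atomic OR emulated with a retry loop around the
   compare-and-exchange builtin.  */
static inline int
example_or_via_cas (int *mem, int mask)
{
  int old = *mem;
  /* On failure, __atomic_compare_exchange_n stores the current value of
     *mem back into 'old', so each iteration retries with fresh data.  */
  while (!__atomic_compare_exchange_n (mem, &old, old | mask, 0,
                                       __ATOMIC_ACQUIRE, __ATOMIC_RELAXED))
    ;
  return old;
}

/* After: the dedicated builtin, as used by the new
   __arch_exchange_and_or_*_int macros below.  */
static inline int
example_or_direct (int *mem, int mask)
{
  return __atomic_fetch_or (mem, mask, __ATOMIC_ACQUIRE);
}

On ARM both forms still expand to ldrex/strex sequences, but the CAS
emulation wraps one retry loop inside another, while the fetch-or form
compiles to a single loop; that is the likely source of the size saving.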

ChangeLog:

2014-10-03  Will Newton  <will.newton@linaro.org>

	* sysdeps/arm/bits/atomic.h [__GNUC_PREREQ (4, 7)
	&& __GCC_HAVE_SYNC_COMPARE_AND_SWAP_4]
	(__arch_compare_and_exchange_bool_8_int): Define in terms of
	gcc atomic builtin rather than link error.
	(__arch_compare_and_exchange_bool_16_int): Likewise.
	(__arch_compare_and_exchange_bool_64_int): Likewise.
	(__arch_compare_and_exchange_val_8_int): Likewise.
	(__arch_compare_and_exchange_val_16_int): Likewise.
	(__arch_compare_and_exchange_val_64_int): Likewise.
	(__arch_exchange_8_int): Likewise.
	(__arch_exchange_16_int): Likewise.
	(__arch_exchange_64_int): Likewise.
	(__arch_exchange_and_add_8_int): New define.
	(__arch_exchange_and_add_16_int): Likewise.
	(__arch_exchange_and_add_32_int): Likewise.
	(__arch_exchange_and_add_64_int): Likewise.
	(atomic_exchange_and_add_acq): Likewise.
	(atomic_exchange_and_add_rel): Likewise.
	(catomic_exchange_and_add): Likewise.
	(__arch_exchange_and_and_8_int): New define.
	(__arch_exchange_and_and_16_int): Likewise.
	(__arch_exchange_and_and_32_int): Likewise.
	(__arch_exchange_and_and_64_int): Likewise.
	(atomic_and): Likewise.
	(atomic_and_val): Likewise.
	(catomic_and): Likewise.
	(__arch_exchange_and_or_8_int): New define.
	(__arch_exchange_and_or_16_int): Likewise.
	(__arch_exchange_and_or_32_int): Likewise.
	(__arch_exchange_and_or_64_int): Likewise.
	(atomic_or): Likewise.
	(atomic_or_val): Likewise.
	(catomic_or): Likewise.
---
 sysdeps/arm/bits/atomic.h | 203 ++++++++++++++++++++++++++++++++++------------
 1 file changed, 153 insertions(+), 50 deletions(-)

diff --git a/sysdeps/arm/bits/atomic.h b/sysdeps/arm/bits/atomic.h
index 88cbe67..be314e4 100644
--- a/sysdeps/arm/bits/atomic.h
+++ b/sysdeps/arm/bits/atomic.h
@@ -52,84 +52,184 @@ void __arm_link_error (void);
    a pattern to do this efficiently.  */
 
 #if __GNUC_PREREQ (4, 7) && defined __GCC_HAVE_SYNC_COMPARE_AND_SWAP_4
 
-# define atomic_exchange_acq(mem, value) \
-  __atomic_val_bysize (__arch_exchange, int, mem, value, __ATOMIC_ACQUIRE)
+/* Compare and exchange.
+   For all "bool" routines, we return FALSE if exchange successful.  */
 
-# define atomic_exchange_rel(mem, value) \
-  __atomic_val_bysize (__arch_exchange, int, mem, value, __ATOMIC_RELEASE)
+# define __arch_compare_and_exchange_bool_8_int(mem, newval, oldval, model) \
+  ({ \
+    typeof (*mem) __oldval = (oldval); \
+    !__atomic_compare_exchange_n (mem, (void *) &__oldval, newval, 0, \
+                                  model, __ATOMIC_RELAXED); \
+  })
 
-/* Atomic exchange (without compare).  */
+# define __arch_compare_and_exchange_bool_16_int(mem, newval, oldval, model) \
+  ({ \
+    typeof (*mem) __oldval = (oldval); \
+    !__atomic_compare_exchange_n (mem, (void *) &__oldval, newval, 0, \
+                                  model, __ATOMIC_RELAXED); \
+  })
 
-# define __arch_exchange_8_int(mem, newval, model) \
-  (__arm_link_error (), (typeof (*mem)) 0)
+# define __arch_compare_and_exchange_bool_32_int(mem, newval, oldval, model) \
+  ({ \
+    typeof (*mem) __oldval = (oldval); \
+    !__atomic_compare_exchange_n (mem, (void *) &__oldval, newval, 0, \
+                                  model, __ATOMIC_RELAXED); \
+  })
 
-# define __arch_exchange_16_int(mem, newval, model) \
-  (__arm_link_error (), (typeof (*mem)) 0)
+# define __arch_compare_and_exchange_bool_64_int(mem, newval, oldval, model) \
+  ({ \
+    typeof (*mem) __oldval = (oldval); \
+    !__atomic_compare_exchange_n (mem, (void *) &__oldval, newval, 0, \
+                                  model, __ATOMIC_RELAXED); \
+  })
 
-# define __arch_exchange_32_int(mem, newval, model) \
-  __atomic_exchange_n (mem, newval, model)
+# define __arch_compare_and_exchange_val_8_int(mem, newval, oldval, model) \
+  ({ \
+    typeof (*mem) __oldval = (oldval); \
+    __atomic_compare_exchange_n (mem, (void *) &__oldval, newval, 0, \
+                                 model, __ATOMIC_RELAXED); \
+    __oldval; \
+  })
+
+# define __arch_compare_and_exchange_val_16_int(mem, newval, oldval, model) \
+  ({ \
+    typeof (*mem) __oldval = (oldval); \
+    __atomic_compare_exchange_n (mem, (void *) &__oldval, newval, 0, \
+                                 model, __ATOMIC_RELAXED); \
+    __oldval; \
+  })
+
+# define __arch_compare_and_exchange_val_32_int(mem, newval, oldval, model) \
+  ({ \
+    typeof (*mem) __oldval = (oldval); \
+    __atomic_compare_exchange_n (mem, (void *) &__oldval, newval, 0, \
+                                 model, __ATOMIC_RELAXED); \
+    __oldval; \
+  })
+
+# define __arch_compare_and_exchange_val_64_int(mem, newval, oldval, model) \
+  ({ \
+    typeof (*mem) __oldval = (oldval); \
+    __atomic_compare_exchange_n (mem, (void *) &__oldval, newval, 0, \
+                                 model, __ATOMIC_RELAXED); \
+    __oldval; \
+  })
 
-# define __arch_exchange_64_int(mem, newval, model) \
-  (__arm_link_error (), (typeof (*mem)) 0)
 
 /* Compare and exchange with "acquire" semantics, ie barrier after.
    */
 
-# define atomic_compare_and_exchange_bool_acq(mem, new, old) \
-  __atomic_bool_bysize (__arch_compare_and_exchange_bool, int, \
-                        mem, new, old, __ATOMIC_ACQUIRE)
+# define atomic_compare_and_exchange_bool_acq(mem, new, old) \
+  __atomic_bool_bysize (__arch_compare_and_exchange_bool, int, \
+                        mem, new, old, __ATOMIC_ACQUIRE)
 
-# define atomic_compare_and_exchange_val_acq(mem, new, old) \
-  __atomic_val_bysize (__arch_compare_and_exchange_val, int, \
-                       mem, new, old, __ATOMIC_ACQUIRE)
+# define atomic_compare_and_exchange_val_acq(mem, new, old) \
+  __atomic_val_bysize (__arch_compare_and_exchange_val, int, \
+                       mem, new, old, __ATOMIC_ACQUIRE)
 
 /* Compare and exchange with "release" semantics, ie barrier before.  */
 
-# define atomic_compare_and_exchange_bool_rel(mem, new, old) \
-  __atomic_bool_bysize (__arch_compare_and_exchange_bool, int, \
-                        mem, new, old, __ATOMIC_RELEASE)
+# define atomic_compare_and_exchange_bool_rel(mem, new, old) \
+  __atomic_bool_bysize (__arch_compare_and_exchange_bool, int, \
+                        mem, new, old, __ATOMIC_RELEASE)
 
-# define atomic_compare_and_exchange_val_rel(mem, new, old) \
+# define atomic_compare_and_exchange_val_rel(mem, new, old) \
   __atomic_val_bysize (__arch_compare_and_exchange_val, int, \
                        mem, new, old, __ATOMIC_RELEASE)
 
-/* Compare and exchange.
-   For all "bool" routines, we return FALSE if exchange succesful.  */
-# define __arch_compare_and_exchange_bool_8_int(mem, newval, oldval, model) \
-  ({__arm_link_error (); 0; })
+/* Atomic exchange (without compare).  */
 
-# define __arch_compare_and_exchange_bool_16_int(mem, newval, oldval, model) \
-  ({__arm_link_error (); 0; })
+# define __arch_exchange_8_int(mem, newval, model) \
+  __atomic_exchange_n (mem, newval, model)
 
-# define __arch_compare_and_exchange_bool_32_int(mem, newval, oldval, model) \
-  ({ \
-    typeof (*mem) __oldval = (oldval); \
-    !__atomic_compare_exchange_n (mem, (void *) &__oldval, newval, 0, \
-                                  model, __ATOMIC_RELAXED); \
-  })
+# define __arch_exchange_16_int(mem, newval, model) \
+  __atomic_exchange_n (mem, newval, model)
 
-# define __arch_compare_and_exchange_bool_64_int(mem, newval, oldval, model) \
-  ({__arm_link_error (); 0; })
+# define __arch_exchange_32_int(mem, newval, model) \
+  __atomic_exchange_n (mem, newval, model)
 
-# define __arch_compare_and_exchange_val_8_int(mem, newval, oldval, model) \
-  ({__arm_link_error (); oldval; })
+# define __arch_exchange_64_int(mem, newval, model) \
+  __atomic_exchange_n (mem, newval, model)
 
-# define __arch_compare_and_exchange_val_16_int(mem, newval, oldval, model) \
-  ({__arm_link_error (); oldval; })
+# define atomic_exchange_acq(mem, value) \
+  __atomic_val_bysize (__arch_exchange, int, mem, value, __ATOMIC_ACQUIRE)
 
-# define __arch_compare_and_exchange_val_32_int(mem, newval, oldval, model) \
-  ({ \
-    typeof (*mem) __oldval = (oldval); \
-    __atomic_compare_exchange_n (mem, (void *) &__oldval, newval, 0, \
-                                 model, __ATOMIC_RELAXED); \
-    __oldval; \
-  })
+# define atomic_exchange_rel(mem, value) \
+  __atomic_val_bysize (__arch_exchange, int, mem, value, __ATOMIC_RELEASE)
+
+
+/* Atomically add value and return the previous (unincremented) value.
+   */
+
+# define __arch_exchange_and_add_8_int(mem, value, model) \
+  __atomic_fetch_add (mem, value, model)
+
+# define __arch_exchange_and_add_16_int(mem, value, model) \
+  __atomic_fetch_add (mem, value, model)
+
+# define __arch_exchange_and_add_32_int(mem, value, model) \
+  __atomic_fetch_add (mem, value, model)
+
+# define __arch_exchange_and_add_64_int(mem, value, model) \
+  __atomic_fetch_add (mem, value, model)
+
+# define atomic_exchange_and_add_acq(mem, value) \
+  __atomic_val_bysize (__arch_exchange_and_add, int, mem, value, \
+                       __ATOMIC_ACQUIRE)
+
+# define atomic_exchange_and_add_rel(mem, value) \
+  __atomic_val_bysize (__arch_exchange_and_add, int, mem, value, \
+                       __ATOMIC_RELEASE)
+
+# define catomic_exchange_and_add atomic_exchange_and_add
+
+/* Atomically bitwise and value and return the previous value.  */
+
+# define __arch_exchange_and_and_8_int(mem, value, model) \
+  __atomic_fetch_and (mem, value, model)
+
+# define __arch_exchange_and_and_16_int(mem, value, model) \
+  __atomic_fetch_and (mem, value, model)
 
-# define __arch_compare_and_exchange_val_64_int(mem, newval, oldval, model) \
-  ({__arm_link_error (); oldval; })
+# define __arch_exchange_and_and_32_int(mem, value, model) \
+  __atomic_fetch_and (mem, value, model)
+
+# define __arch_exchange_and_and_64_int(mem, value, model) \
+  __atomic_fetch_and (mem, value, model)
+
+# define atomic_and(mem, value) \
+  __atomic_val_bysize (__arch_exchange_and_and, int, mem, value, \
+                       __ATOMIC_ACQUIRE)
+
+# define atomic_and_val atomic_and
+
+# define catomic_and atomic_and
+
+/* Atomically bitwise or value and return the previous value.  */
+
+# define __arch_exchange_and_or_8_int(mem, value, model) \
+  __atomic_fetch_or (mem, value, model)
+
+# define __arch_exchange_and_or_16_int(mem, value, model) \
+  __atomic_fetch_or (mem, value, model)
+
+# define __arch_exchange_and_or_32_int(mem, value, model) \
+  __atomic_fetch_or (mem, value, model)
+
+# define __arch_exchange_and_or_64_int(mem, value, model) \
+  __atomic_fetch_or (mem, value, model)
+
+# define atomic_or(mem, value) \
+  __atomic_val_bysize (__arch_exchange_and_or, int, mem, value, \
+                       __ATOMIC_ACQUIRE)
+
+# define atomic_or_val atomic_or
+
+# define catomic_or atomic_or
 
 #elif defined __GCC_HAVE_SYNC_COMPARE_AND_SWAP_4
+
 /* Atomic compare and exchange.  */
+
 # define __arch_compare_and_exchange_val_32_acq(mem, newval, oldval) \
   __sync_val_compare_and_swap ((mem), (oldval), (newval))
 #else
@@ -138,8 +238,10 @@ void __arm_link_error (void);
 #endif
 
 #if !__GNUC_PREREQ (4, 7) || !defined (__GCC_HAVE_SYNC_COMPARE_AND_SWAP_4)
+
 /* We don't support atomic operations on any non-word types.
    So make them link errors.  */
+
 # define __arch_compare_and_exchange_val_8_acq(mem, newval, oldval) \
   ({ __arm_link_error (); oldval; })
 
@@ -148,6 +250,7 @@ void __arm_link_error (void);
 
 # define __arch_compare_and_exchange_val_64_acq(mem, newval, oldval) \
   ({ __arm_link_error (); oldval; })
+
 #endif
 
 /* An OS-specific bits/atomic.h file will define this macro if