From patchwork Fri Oct 17 15:31:21 2014
X-Patchwork-Submitter: Will Newton <will.newton@linaro.org>
X-Patchwork-Id: 38883
From: Will Newton <will.newton@linaro.org>
To: libc-alpha@sourceware.org
Subject: [PATCH 4/5] sysdeps/aarch64/bits/atomic.h: Switch to generic implementation
Date: Fri, 17 Oct 2014 16:31:21 +0100
Message-Id: <1413559882-959-5-git-send-email-will.newton@linaro.org>
In-Reply-To: <1413559882-959-1-git-send-email-will.newton@linaro.org>
References: <1413559882-959-1-git-send-email-will.newton@linaro.org>

Switch the AArch64 port to using the generic GCC intrinsic based
atomic implementation.

ChangeLog:

2014-10-15  Will Newton  <will.newton@linaro.org>

	* sysdeps/aarch64/bits/atomic.h (__ARCH_ATOMIC_64_SUPPORTED):
	Define to 1.  Include sysdeps/generic/atomic_types.h.
	Include sysdeps/generic/atomic.h.  Remove existing atomic
	defines and typedefs.
---
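Note (added for context, not part of the commit message): the generic
implementation this patch switches to is built on the same GCC
__atomic builtins that the removed macros wrapped directly.  A minimal
standalone sketch of the compare-and-exchange semantics involved; the
file name and test values are invented for illustration:

/* cas-sketch.c: illustrates __atomic_compare_exchange_n, which the
   removed __arch_compare_and_exchange_* macros wrap.  Note that the
   glibc "bool" macros return FALSE on a successful exchange, i.e. the
   inverse of the builtin's return value.  */
#include <stdint.h>
#include <stdio.h>

int
main (void)
{
  int64_t mem = 42;
  int64_t expected = 42;

  /* Succeeds: mem == expected, so mem becomes 100 and the builtin
     returns true (a glibc bool-style macro would therefore yield 0).  */
  int ok = __atomic_compare_exchange_n (&mem, &expected, 100, 0,
					__ATOMIC_ACQUIRE, __ATOMIC_RELAXED);
  printf ("ok=%d mem=%lld\n", ok, (long long) mem);

  /* Fails: mem (100) != expected (42); expected is updated to 100,
     which is how the val-style macros return the old value.  */
  ok = __atomic_compare_exchange_n (&mem, &expected, 7, 0,
				    __ATOMIC_ACQUIRE, __ATOMIC_RELAXED);
  printf ("ok=%d expected=%lld\n", ok, (long long) expected);
  return 0;
}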
 sysdeps/aarch64/bits/atomic.h | 151 +-----------------------------------------
 1 file changed, 3 insertions(+), 148 deletions(-)

diff --git a/sysdeps/aarch64/bits/atomic.h b/sysdeps/aarch64/bits/atomic.h
index 456e2ec..a44f543 100644
--- a/sysdeps/aarch64/bits/atomic.h
+++ b/sysdeps/aarch64/bits/atomic.h
@@ -19,153 +19,8 @@
 #ifndef _AARCH64_BITS_ATOMIC_H
 #define _AARCH64_BITS_ATOMIC_H	1
 
-#include <stdint.h>
-
-typedef int8_t atomic8_t;
-typedef int16_t atomic16_t;
-typedef int32_t atomic32_t;
-typedef int64_t atomic64_t;
-
-typedef uint8_t uatomic8_t;
-typedef uint16_t uatomic16_t;
-typedef uint32_t uatomic32_t;
-typedef uint64_t uatomic64_t;
-
-typedef intptr_t atomicptr_t;
-typedef uintptr_t uatomicptr_t;
-typedef intmax_t atomic_max_t;
-typedef uintmax_t uatomic_max_t;
-
-
-/* Compare and exchange.
-   For all "bool" routines, we return FALSE if exchange succesful.  */
-
-# define __arch_compare_and_exchange_bool_8_int(mem, newval, oldval, model) \
-  ({ \
-    typeof (*mem) __oldval = (oldval); \
-    !__atomic_compare_exchange_n (mem, (void *) &__oldval, newval, 0, \
-				  model, __ATOMIC_RELAXED); \
-  })
-
-# define __arch_compare_and_exchange_bool_16_int(mem, newval, oldval, model) \
-  ({ \
-    typeof (*mem) __oldval = (oldval); \
-    !__atomic_compare_exchange_n (mem, (void *) &__oldval, newval, 0, \
-				  model, __ATOMIC_RELAXED); \
-  })
-
-# define __arch_compare_and_exchange_bool_32_int(mem, newval, oldval, model) \
-  ({ \
-    typeof (*mem) __oldval = (oldval); \
-    !__atomic_compare_exchange_n (mem, (void *) &__oldval, newval, 0, \
-				  model, __ATOMIC_RELAXED); \
-  })
-
-# define __arch_compare_and_exchange_bool_64_int(mem, newval, oldval, model) \
-  ({ \
-    typeof (*mem) __oldval = (oldval); \
-    !__atomic_compare_exchange_n (mem, (void *) &__oldval, newval, 0, \
-				  model, __ATOMIC_RELAXED); \
-  })
-
-# define __arch_compare_and_exchange_val_8_int(mem, newval, oldval, model) \
-  ({ \
-    typeof (*mem) __oldval = (oldval); \
-    __atomic_compare_exchange_n (mem, (void *) &__oldval, newval, 0, \
-				 model, __ATOMIC_RELAXED); \
-    __oldval; \
-  })
-
-# define __arch_compare_and_exchange_val_16_int(mem, newval, oldval, model) \
-  ({ \
-    typeof (*mem) __oldval = (oldval); \
-    __atomic_compare_exchange_n (mem, (void *) &__oldval, newval, 0, \
-				 model, __ATOMIC_RELAXED); \
-    __oldval; \
-  })
-
-# define __arch_compare_and_exchange_val_32_int(mem, newval, oldval, model) \
-  ({ \
-    typeof (*mem) __oldval = (oldval); \
-    __atomic_compare_exchange_n (mem, (void *) &__oldval, newval, 0, \
-				 model, __ATOMIC_RELAXED); \
-    __oldval; \
-  })
-
-# define __arch_compare_and_exchange_val_64_int(mem, newval, oldval, model) \
-  ({ \
-    typeof (*mem) __oldval = (oldval); \
-    __atomic_compare_exchange_n (mem, (void *) &__oldval, newval, 0, \
-				 model, __ATOMIC_RELAXED); \
-    __oldval; \
-  })
-
-
-/* Compare and exchange with "acquire" semantics, ie barrier after.  */
-
-# define atomic_compare_and_exchange_bool_acq(mem, new, old) \
-  __atomic_bool_bysize (__arch_compare_and_exchange_bool, int, \
-			mem, new, old, __ATOMIC_ACQUIRE)
-
-# define atomic_compare_and_exchange_val_acq(mem, new, old) \
-  __atomic_val_bysize (__arch_compare_and_exchange_val, int, \
-		       mem, new, old, __ATOMIC_ACQUIRE)
-
-/* Compare and exchange with "release" semantics, ie barrier before.  */
-
-# define atomic_compare_and_exchange_bool_rel(mem, new, old) \
-  __atomic_bool_bysize (__arch_compare_and_exchange_bool, int, \
-			mem, new, old, __ATOMIC_RELEASE)
-
-# define atomic_compare_and_exchange_val_rel(mem, new, old) \
-  __atomic_val_bysize (__arch_compare_and_exchange_val, int, \
-		       mem, new, old, __ATOMIC_RELEASE)
-
-
-/* Atomic exchange (without compare).  */
-
-# define __arch_exchange_8_int(mem, newval, model) \
-  __atomic_exchange_n (mem, newval, model)
-
-# define __arch_exchange_16_int(mem, newval, model) \
-  __atomic_exchange_n (mem, newval, model)
-
-# define __arch_exchange_32_int(mem, newval, model) \
-  __atomic_exchange_n (mem, newval, model)
-
-# define __arch_exchange_64_int(mem, newval, model) \
-  __atomic_exchange_n (mem, newval, model)
-
-# define atomic_exchange_acq(mem, value) \
-  __atomic_val_bysize (__arch_exchange, int, mem, value, __ATOMIC_ACQUIRE)
-
-# define atomic_exchange_rel(mem, value) \
-  __atomic_val_bysize (__arch_exchange, int, mem, value, __ATOMIC_RELEASE)
-
-
-/* Atomically add value and return the previous (unincremented) value.  */
-
-# define __arch_exchange_and_add_8_int(mem, value, model) \
-  __atomic_fetch_add (mem, value, model)
-
-# define __arch_exchange_and_add_16_int(mem, value, model) \
-  __atomic_fetch_add (mem, value, model)
-
-# define __arch_exchange_and_add_32_int(mem, value, model) \
-  __atomic_fetch_add (mem, value, model)
-
-# define __arch_exchange_and_add_64_int(mem, value, model) \
-  __atomic_fetch_add (mem, value, model)
-
-# define atomic_exchange_and_add_acq(mem, value) \
-  __atomic_val_bysize (__arch_exchange_and_add, int, mem, value, \
-		       __ATOMIC_ACQUIRE)
-
-# define atomic_exchange_and_add_rel(mem, value) \
-  __atomic_val_bysize (__arch_exchange_and_add, int, mem, value, \
-		       __ATOMIC_RELEASE)
-
-/* Barrier macro. */
-#define atomic_full_barrier() __sync_synchronize()
+# define __ARCH_ATOMIC_64_SUPPORTED 1
+# include <sysdeps/generic/atomic_types.h>
+# include <sysdeps/generic/atomic.h>
 
 #endif
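
Follow-up note (added for context, not part of the patch): the removed
exchange and fetch-and-add macros reduce to GCC builtins in the same
way as the compare-and-exchange ones above.  A short standalone
sketch; the file name and values are invented for illustration:

/* fetch-add-sketch.c: the builtins behind the removed
   atomic_exchange_* and atomic_exchange_and_add_* macros.  Both
   return the previous value of the memory location.  */
#include <stdint.h>
#include <stdio.h>

int
main (void)
{
  int32_t counter = 5;

  /* Like atomic_exchange_and_add_acq: add 3, return the old value (5).  */
  int32_t prev = __atomic_fetch_add (&counter, 3, __ATOMIC_ACQUIRE);

  /* Like atomic_exchange_rel: store 0, return the old value (8).  */
  int32_t old = __atomic_exchange_n (&counter, 0, __ATOMIC_RELEASE);

  printf ("prev=%d old=%d counter=%d\n", prev, old, counter);
  return 0;
}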