From patchwork Fri Oct 17 15:31:19 2014
X-Patchwork-Submitter: Will Newton
X-Patchwork-Id: 38882
From: Will Newton <will.newton@linaro.org>
To: libc-alpha@sourceware.org
Subject: [PATCH 2/5] sysdeps/generic/atomic.h: Add a generic atomic header
Date: Fri, 17 Oct 2014 16:31:19 +0100
Message-Id: <1413559882-959-3-git-send-email-will.newton@linaro.org>
In-Reply-To: <1413559882-959-1-git-send-email-will.newton@linaro.org>
References: <1413559882-959-1-git-send-email-will.newton@linaro.org>

Add a new header that uses modern GCC intrinsics for implementing the
atomic_* and catomic_* functions in a relatively efficient manner. This
code is based on the existing ARM and AArch64 implementations.

ChangeLog:

2014-10-15  Will Newton  <will.newton@linaro.org>

        * sysdeps/generic/atomic.h: New file.
---
 sysdeps/generic/atomic.h | 240 +++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 240 insertions(+)
 create mode 100644 sysdeps/generic/atomic.h
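(For reference only, not part of the patch: a minimal standalone sketch of the
GCC builtins the new header wraps, with purely illustrative variable names.
It shows the two conventions the macros below rely on: the *_bool compare-and-
exchange macros negate __atomic_compare_exchange_n, which itself returns true
on success, so they return FALSE when the exchange succeeds; and the fetch-style
builtins return the previous value of the memory location.)

  #include <stdio.h>

  int
  main (void)
  {
    int mem = 5, expected = 5;

    /* __atomic_compare_exchange_n returns nonzero on success; the
       __arch_compare_and_exchange_bool_* macros negate this, so they
       yield FALSE when the exchange succeeded.  */
    int ok = __atomic_compare_exchange_n (&mem, &expected, 7, 0,
                                          __ATOMIC_ACQUIRE, __ATOMIC_RELAXED);
    printf ("builtin returned %d, mem is now %d\n", ok, mem);   /* 1, 7 */

    /* __atomic_fetch_add returns the previous (unincremented) value,
       which is what atomic_exchange_and_add_acq passes through.  */
    int prev = __atomic_fetch_add (&mem, 3, __ATOMIC_ACQUIRE);
    printf ("previous value %d, mem is now %d\n", prev, mem);   /* 7, 10 */

    return 0;
  }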
diff --git a/sysdeps/generic/atomic.h b/sysdeps/generic/atomic.h
new file mode 100644
index 0000000..dd1854e
--- /dev/null
+++ b/sysdeps/generic/atomic.h
@@ -0,0 +1,240 @@
+/* Atomic operations.  Generic GCC intrinsic version.
+   Copyright (C) 2002-2014 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library.  If not, see
+   <http://www.gnu.org/licenses/>.  */
+
+void __atomic_link_error (void);
+
+/* Barrier macro. */
+#ifndef atomic_full_barrier
+# define atomic_full_barrier() __sync_synchronize()
+#endif
+
+/* Compare and exchange.
+   For all "bool" routines, we return FALSE if exchange successful.  */
+
+#define __arch_compare_and_exchange_bool_8_int(mem, newval, oldval, model) \
+  ({ \
+    typeof (*mem) __oldval = (oldval); \
+    !__atomic_compare_exchange_n (mem, (void *) &__oldval, newval, 0, \
+                                  model, __ATOMIC_RELAXED); \
+  })
+
+#define __arch_compare_and_exchange_bool_16_int(mem, newval, oldval, model) \
+  ({ \
+    typeof (*mem) __oldval = (oldval); \
+    !__atomic_compare_exchange_n (mem, (void *) &__oldval, newval, 0, \
+                                  model, __ATOMIC_RELAXED); \
+  })
+
+#define __arch_compare_and_exchange_bool_32_int(mem, newval, oldval, model) \
+  ({ \
+    typeof (*mem) __oldval = (oldval); \
+    !__atomic_compare_exchange_n (mem, (void *) &__oldval, newval, 0, \
+                                  model, __ATOMIC_RELAXED); \
+  })
+
+#if __ARCH_ATOMIC_64_SUPPORTED
+# define __arch_compare_and_exchange_bool_64_int(mem, newval, oldval, model) \
+  ({ \
+    typeof (*mem) __oldval = (oldval); \
+    !__atomic_compare_exchange_n (mem, (void *) &__oldval, newval, 0, \
+                                  model, __ATOMIC_RELAXED); \
+  })
+#else
+# define __arch_compare_and_exchange_bool_64_int(mem, newval, oldval, model) \
+  ({ __atomic_link_error (); 0; })
+#endif
+
+#define __arch_compare_and_exchange_val_8_int(mem, newval, oldval, model) \
+  ({ \
+    typeof (*mem) __oldval = (oldval); \
+    __atomic_compare_exchange_n (mem, (void *) &__oldval, newval, 0, \
+                                 model, __ATOMIC_RELAXED); \
+    __oldval; \
+  })
+
+#define __arch_compare_and_exchange_val_16_int(mem, newval, oldval, model) \
+  ({ \
+    typeof (*mem) __oldval = (oldval); \
+    __atomic_compare_exchange_n (mem, (void *) &__oldval, newval, 0, \
+                                 model, __ATOMIC_RELAXED); \
+    __oldval; \
+  })
+
+#define __arch_compare_and_exchange_val_32_int(mem, newval, oldval, model) \
+  ({ \
+    typeof (*mem) __oldval = (oldval); \
+    __atomic_compare_exchange_n (mem, (void *) &__oldval, newval, 0, \
+                                 model, __ATOMIC_RELAXED); \
+    __oldval; \
+  })
+
+#if __ARCH_ATOMIC_64_SUPPORTED
+# define __arch_compare_and_exchange_val_64_int(mem, newval, oldval, model) \
+  ({ \
+    typeof (*mem) __oldval = (oldval); \
+    __atomic_compare_exchange_n (mem, (void *) &__oldval, newval, 0, \
+                                 model, __ATOMIC_RELAXED); \
+    __oldval; \
+  })
+#else
+# define __arch_compare_and_exchange_val_64_int(mem, newval, oldval, model) \
+  ({ __atomic_link_error (); (__typeof (*(mem)))0; })
+#endif
+
+
+/* Compare and exchange with "acquire" semantics, ie barrier after.  */
+
+#define atomic_compare_and_exchange_bool_acq(mem, new, old) \
+  __atomic_bool_bysize (__arch_compare_and_exchange_bool, int, \
+                        mem, new, old, __ATOMIC_ACQUIRE)
+
+#define atomic_compare_and_exchange_val_acq(mem, new, old) \
+  __atomic_val_bysize (__arch_compare_and_exchange_val, int, \
+                       mem, new, old, __ATOMIC_ACQUIRE)
+
+/* Compare and exchange with "release" semantics, ie barrier before.  */
+
+#define atomic_compare_and_exchange_bool_rel(mem, new, old) \
+  __atomic_bool_bysize (__arch_compare_and_exchange_bool, int, \
+                        mem, new, old, __ATOMIC_RELEASE)
+
+#define atomic_compare_and_exchange_val_rel(mem, new, old) \
+  __atomic_val_bysize (__arch_compare_and_exchange_val, int, \
+                       mem, new, old, __ATOMIC_RELEASE)
+
+
+/* Atomic exchange (without compare).  */
+
+#define __arch_exchange_8_int(mem, newval, model) \
+  __atomic_exchange_n (mem, newval, model)
+
+#define __arch_exchange_16_int(mem, newval, model) \
+  __atomic_exchange_n (mem, newval, model)
+
+#define __arch_exchange_32_int(mem, newval, model) \
+  __atomic_exchange_n (mem, newval, model)
+
+#if __ARCH_ATOMIC_64_SUPPORTED
+# define __arch_exchange_64_int(mem, newval, model) \
+  __atomic_exchange_n (mem, newval, model)
+#else
+# define __arch_exchange_64_int(mem, newval, model) \
+  ({ __atomic_link_error (); (__typeof (*(mem)))0; })
+#endif
+
+#define atomic_exchange_acq(mem, value) \
+  __atomic_val_bysize (__arch_exchange, int, mem, value, __ATOMIC_ACQUIRE)
+
+#define atomic_exchange_rel(mem, value) \
+  __atomic_val_bysize (__arch_exchange, int, mem, value, __ATOMIC_RELEASE)
+
+
+/* Atomically add value and return the previous (unincremented) value.  */
+
+#define __arch_exchange_and_add_8_int(mem, value, model) \
+  __atomic_fetch_add (mem, value, model)
+
+#define __arch_exchange_and_add_16_int(mem, value, model) \
+  __atomic_fetch_add (mem, value, model)
+
+#define __arch_exchange_and_add_32_int(mem, value, model) \
+  __atomic_fetch_add (mem, value, model)
+
+#if __ARCH_ATOMIC_64_SUPPORTED
+# define __arch_exchange_and_add_64_int(mem, value, model) \
+  __atomic_fetch_add (mem, value, model)
+#else
+# define __arch_exchange_and_add_64_int(mem, value, model) \
+  ({ __atomic_link_error (); (__typeof (*(mem)))0; })
+#endif
+
+#define atomic_exchange_and_add_acq(mem, value) \
+  __atomic_val_bysize (__arch_exchange_and_add, int, mem, value, \
+                       __ATOMIC_ACQUIRE)
+
+#define atomic_exchange_and_add_rel(mem, value) \
+  __atomic_val_bysize (__arch_exchange_and_add, int, mem, value, \
+                       __ATOMIC_RELEASE)
+
+#define atomic_exchange_and_add_relaxed(mem, value) \
+  __atomic_val_bysize (__arch_exchange_and_add, int, mem, value, \
+                       __ATOMIC_RELAXED)
+
+#define catomic_exchange_and_add atomic_exchange_and_add_acq
+
+/* Atomically bitwise and value and return the previous value.  */
+
+#define __arch_exchange_and_and_8_int(mem, value, model) \
+  __atomic_fetch_and (mem, value, model)
+
+#define __arch_exchange_and_and_16_int(mem, value, model) \
+  __atomic_fetch_and (mem, value, model)
+
+#define __arch_exchange_and_and_32_int(mem, value, model) \
+  __atomic_fetch_and (mem, value, model)
+
+#if __ARCH_ATOMIC_64_SUPPORTED
+# define __arch_exchange_and_and_64_int(mem, value, model) \
+  __atomic_fetch_and (mem, value, model)
+#else
+# define __arch_exchange_and_and_64_int(mem, value, model) \
+  ({ __atomic_link_error (); (__typeof (*(mem)))0; })
+#endif
+
+#define atomic_and(mem, value) \
+  __atomic_val_bysize (__arch_exchange_and_and, int, mem, value, \
+                       __ATOMIC_ACQUIRE)
+
+#define atomic_and_relaxed(mem, value) \
+  __atomic_val_bysize (__arch_exchange_and_and, int, mem, value, \
+                       __ATOMIC_RELAXED)
+
+#define atomic_and_val atomic_and
+
+#define catomic_and atomic_and
+
+/* Atomically bitwise or value and return the previous value.  */
+
+#define __arch_exchange_and_or_8_int(mem, value, model) \
+  __atomic_fetch_or (mem, value, model)
+
+#define __arch_exchange_and_or_16_int(mem, value, model) \
+  __atomic_fetch_or (mem, value, model)
+
+#define __arch_exchange_and_or_32_int(mem, value, model) \
+  __atomic_fetch_or (mem, value, model)
+
+#if __ARCH_ATOMIC_64_SUPPORTED
+# define __arch_exchange_and_or_64_int(mem, value, model) \
+  __atomic_fetch_or (mem, value, model)
+#else
+# define __arch_exchange_and_or_64_int(mem, value, model) \
+  ({ __atomic_link_error (); (__typeof (*(mem)))0; })
+#endif
+
+#define atomic_or(mem, value) \
+  __atomic_val_bysize (__arch_exchange_and_or, int, mem, value, \
+                       __ATOMIC_ACQUIRE)
+
+#define atomic_or_relaxed(mem, value) \
+  __atomic_val_bysize (__arch_exchange_and_or, int, mem, value, \
+                       __ATOMIC_RELAXED)
+
+#define atomic_or_val atomic_or
+
+#define catomic_or atomic_or
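
(Again for reference only, not part of the patch: a standalone sketch showing
that atomic_exchange_acq, atomic_and and atomic_or, like the builtins they
wrap, hand back the value the memory held before the operation. The variable
names are illustrative, and the plain GCC builtins stand in for the
glibc-internal __atomic_val_bysize dispatch.)

  #include <stdio.h>

  int
  main (void)
  {
    unsigned int flags = 0x3;

    /* __atomic_exchange_n stores the new value and returns the old one.  */
    unsigned int old = __atomic_exchange_n (&flags, 0x4, __ATOMIC_ACQUIRE);

    /* The fetch_or/fetch_and builtins likewise return the prior value.  */
    unsigned int before_or = __atomic_fetch_or (&flags, 0x8, __ATOMIC_ACQUIRE);
    unsigned int before_and = __atomic_fetch_and (&flags, 0xc, __ATOMIC_ACQUIRE);

    printf ("old=%#x before_or=%#x before_and=%#x flags=%#x\n",
            old, before_or, before_and, flags);   /* 0x3 0x4 0xc 0xc */
    return 0;
  }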