From patchwork Wed Mar 26 13:38:51 2014
From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xen.org
Date: Wed, 26 Mar 2014 13:38:51 +0000
Message-ID: <1395841133-2223-16-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1395841009.12547.11.camel@kazak.uk.xensource.com>
References: <1395841009.12547.11.camel@kazak.uk.xensource.com>
Cc: julien.grall@linaro.org, tim@xen.org,
    Ian Campbell <ian.campbell@citrix.com>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH v2 16/17] xen: arm: refactor xchg and cmpxchg into their own headers

Since these functions are taken from Linux, this split makes it easier to
compare them against the Linux cmpxchg.h headers (which were themselves
split out from Linux's system.h a while back).
Since these functions are from Linux, the intention is to use Linux coding
style, so include a suitable emacs magic block. For this reason, also fix up
the indentation in the 32-bit version to use hard tabs while moving it. The
64-bit version was already correct.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Julien Grall <julien.grall@linaro.org>
---
 xen/include/asm-arm/arm32/cmpxchg.h | 146 ++++++++++++++++++++++++++++++
 xen/include/asm-arm/arm32/system.h  | 135 +---------------------------
 xen/include/asm-arm/arm64/cmpxchg.h | 167 +++++++++++++++++++++++++++++++++++
 xen/include/asm-arm/arm64/system.h  | 155 +------------------------------
 4 files changed, 315 insertions(+), 288 deletions(-)
 create mode 100644 xen/include/asm-arm/arm32/cmpxchg.h
 create mode 100644 xen/include/asm-arm/arm64/cmpxchg.h

diff --git a/xen/include/asm-arm/arm32/cmpxchg.h b/xen/include/asm-arm/arm32/cmpxchg.h
new file mode 100644
index 0000000..70c6090
--- /dev/null
+++ b/xen/include/asm-arm/arm32/cmpxchg.h
@@ -0,0 +1,146 @@
+#ifndef __ASM_ARM32_CMPXCHG_H
+#define __ASM_ARM32_CMPXCHG_H
+
+extern void __bad_xchg(volatile void *, int);
+
+static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size)
+{
+        unsigned long ret;
+        unsigned int tmp;
+
+        smp_mb();
+
+        switch (size) {
+        case 1:
+                asm volatile("@ __xchg1\n"
+                "1:     ldrexb  %0, [%3]\n"
+                "       strexb  %1, %2, [%3]\n"
+                "       teq     %1, #0\n"
+                "       bne     1b"
+                        : "=&r" (ret), "=&r" (tmp)
+                        : "r" (x), "r" (ptr)
+                        : "memory", "cc");
+                break;
+        case 4:
+                asm volatile("@ __xchg4\n"
+                "1:     ldrex   %0, [%3]\n"
+                "       strex   %1, %2, [%3]\n"
+                "       teq     %1, #0\n"
+                "       bne     1b"
+                        : "=&r" (ret), "=&r" (tmp)
+                        : "r" (x), "r" (ptr)
+                        : "memory", "cc");
+                break;
+        default:
+                __bad_xchg(ptr, size), ret = 0;
+                break;
+        }
+        smp_mb();
+
+        return ret;
+}
+
+/*
+ * Atomic compare and exchange. Compare OLD with MEM, if identical,
+ * store NEW in MEM. Return the initial value in MEM. Success is
+ * indicated by comparing RETURN with OLD.
+ */
+
+extern void __bad_cmpxchg(volatile void *ptr, int size);
+
+static always_inline unsigned long __cmpxchg(
+    volatile void *ptr, unsigned long old, unsigned long new, int size)
+{
+        unsigned long oldval, res;
+
+        switch (size) {
+        case 1:
+                do {
+                        asm volatile("@ __cmpxchg1\n"
+                        "       ldrexb  %1, [%2]\n"
+                        "       mov     %0, #0\n"
+                        "       teq     %1, %3\n"
+                        "       strexbeq %0, %4, [%2]\n"
+                                : "=&r" (res), "=&r" (oldval)
+                                : "r" (ptr), "Ir" (old), "r" (new)
+                                : "memory", "cc");
+                } while (res);
+                break;
+        case 2:
+                do {
+                        asm volatile("@ __cmpxchg2\n"
+                        "       ldrexh  %1, [%2]\n"
+                        "       mov     %0, #0\n"
+                        "       teq     %1, %3\n"
+                        "       strexheq %0, %4, [%2]\n"
+                                : "=&r" (res), "=&r" (oldval)
+                                : "r" (ptr), "Ir" (old), "r" (new)
+                                : "memory", "cc");
+                } while (res);
+                break;
+        case 4:
+                do {
+                        asm volatile("@ __cmpxchg4\n"
+                        "       ldrex   %1, [%2]\n"
+                        "       mov     %0, #0\n"
+                        "       teq     %1, %3\n"
+                        "       strexeq %0, %4, [%2]\n"
+                                : "=&r" (res), "=&r" (oldval)
+                                : "r" (ptr), "Ir" (old), "r" (new)
+                                : "memory", "cc");
+                } while (res);
+                break;
+#if 0
+        case 8:
+                do {
+                        asm volatile("@ __cmpxchg8\n"
+                        "       ldrexd  %1, [%2]\n"
+                        "       mov     %0, #0\n"
+                        "       teq     %1, %3\n"
+                        "       strexdeq %0, %4, [%2]\n"
+                                : "=&r" (res), "=&r" (oldval)
+                                : "r" (ptr), "Ir" (old), "r" (new)
+                                : "memory", "cc");
+                } while (res);
+                break;
+#endif
+        default:
+                __bad_cmpxchg(ptr, size);
+                oldval = 0;
+        }
+
+        return oldval;
+}
+
+static inline unsigned long __cmpxchg_mb(volatile void *ptr, unsigned long old,
+                                         unsigned long new, int size)
+{
+        unsigned long ret;
+
+        smp_mb();
+        ret = __cmpxchg(ptr, old, new, size);
+        smp_mb();
+
+        return ret;
+}
+
+#define cmpxchg(ptr,o,n)                                                \
+        ((__typeof__(*(ptr)))__cmpxchg_mb((ptr),                        \
+                                          (unsigned long)(o),           \
+                                          (unsigned long)(n),           \
+                                          sizeof(*(ptr))))
+
+#define cmpxchg_local(ptr,o,n)                                          \
+        ((__typeof__(*(ptr)))__cmpxchg((ptr),                           \
+                                       (unsigned long)(o),              \
+                                       (unsigned long)(n),              \
+                                       sizeof(*(ptr))))
+#endif
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 8
+ * indent-tabs-mode: t
+ * End:
+ */
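[Illustrative aside, not part of the patch: the comment block above states
the cmpxchg contract -- success is detected by comparing the returned value
with the expected one. A hypothetical caller-side sketch, using only the
cmpxchg() macro introduced by this file; counter_inc() is an invented name:]

/* Lock-free increment built on cmpxchg(): retry until no other CPU
 * modified the counter between our read and our update attempt. */
static inline void counter_inc(volatile unsigned int *ctr)
{
        unsigned int old, prev;

        do {
                old = *ctr;                     /* snapshot current value */
                prev = cmpxchg(ctr, old, old + 1);
        } while (prev != old);                  /* lost a race: try again */
}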
diff --git a/xen/include/asm-arm/arm32/system.h b/xen/include/asm-arm/arm32/system.h
index dfaa3b6..b47b942 100644
--- a/xen/include/asm-arm/arm32/system.h
+++ b/xen/include/asm-arm/arm32/system.h
@@ -2,140 +2,7 @@
 #ifndef __ASM_ARM32_SYSTEM_H
 #define __ASM_ARM32_SYSTEM_H
 
-extern void __bad_xchg(volatile void *, int);
-
-static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size)
-{
-        unsigned long ret;
-        unsigned int tmp;
-
-        smp_mb();
-
-        switch (size) {
-        case 1:
-                asm volatile("@ __xchg1\n"
-                "1:     ldrexb  %0, [%3]\n"
-                "       strexb  %1, %2, [%3]\n"
-                "       teq     %1, #0\n"
-                "       bne     1b"
-                        : "=&r" (ret), "=&r" (tmp)
-                        : "r" (x), "r" (ptr)
-                        : "memory", "cc");
-                break;
-        case 4:
-                asm volatile("@ __xchg4\n"
-                "1:     ldrex   %0, [%3]\n"
-                "       strex   %1, %2, [%3]\n"
-                "       teq     %1, #0\n"
-                "       bne     1b"
-                        : "=&r" (ret), "=&r" (tmp)
-                        : "r" (x), "r" (ptr)
-                        : "memory", "cc");
-                break;
-        default:
-                __bad_xchg(ptr, size), ret = 0;
-                break;
-        }
-        smp_mb();
-
-        return ret;
-}
-
-/*
- * Atomic compare and exchange. Compare OLD with MEM, if identical,
- * store NEW in MEM. Return the initial value in MEM. Success is
- * indicated by comparing RETURN with OLD.
- */
-
-extern void __bad_cmpxchg(volatile void *ptr, int size);
-
-static always_inline unsigned long __cmpxchg(
-    volatile void *ptr, unsigned long old, unsigned long new, int size)
-{
-        unsigned long /*long*/ oldval, res;
-
-        switch (size) {
-        case 1:
-                do {
-                        asm volatile("@ __cmpxchg1\n"
-                        "       ldrexb  %1, [%2]\n"
-                        "       mov     %0, #0\n"
-                        "       teq     %1, %3\n"
-                        "       strexbeq %0, %4, [%2]\n"
-                                : "=&r" (res), "=&r" (oldval)
-                                : "r" (ptr), "Ir" (old), "r" (new)
-                                : "memory", "cc");
-                } while (res);
-                break;
-        case 2:
-                do {
-                        asm volatile("@ __cmpxchg2\n"
-                        "       ldrexh  %1, [%2]\n"
-                        "       mov     %0, #0\n"
-                        "       teq     %1, %3\n"
-                        "       strexheq %0, %4, [%2]\n"
-                                : "=&r" (res), "=&r" (oldval)
-                                : "r" (ptr), "Ir" (old), "r" (new)
-                                : "memory", "cc");
-                } while (res);
-                break;
-        case 4:
-                do {
-                        asm volatile("@ __cmpxchg4\n"
-                        "       ldrex   %1, [%2]\n"
-                        "       mov     %0, #0\n"
-                        "       teq     %1, %3\n"
-                        "       strexeq %0, %4, [%2]\n"
-                                : "=&r" (res), "=&r" (oldval)
-                                : "r" (ptr), "Ir" (old), "r" (new)
-                                : "memory", "cc");
-                } while (res);
-                break;
-#if 0
-        case 8:
-                do {
-                        asm volatile("@ __cmpxchg8\n"
-                        "       ldrexd  %1, [%2]\n"
-                        "       mov     %0, #0\n"
-                        "       teq     %1, %3\n"
-                        "       strexdeq %0, %4, [%2]\n"
-                                : "=&r" (res), "=&r" (oldval)
-                                : "r" (ptr), "Ir" (old), "r" (new)
-                                : "memory", "cc");
-                } while (res);
-                break;
-#endif
-        default:
-                __bad_cmpxchg(ptr, size);
-                oldval = 0;
-        }
-
-        return oldval;
-}
-
-static inline unsigned long __cmpxchg_mb(volatile void *ptr, unsigned long old,
-                                         unsigned long new, int size)
-{
-        unsigned long ret;
-
-        smp_mb();
-        ret = __cmpxchg(ptr, old, new, size);
-        smp_mb();
-
-        return ret;
-}
-
-#define cmpxchg(ptr,o,n)                                                \
-        ((__typeof__(*(ptr)))__cmpxchg_mb((ptr),                        \
-                                          (unsigned long)(o),           \
-                                          (unsigned long)(n),           \
-                                          sizeof(*(ptr))))
-
-#define cmpxchg_local(ptr,o,n)                                          \
-        ((__typeof__(*(ptr)))__cmpxchg((ptr),                           \
-                                       (unsigned long)(o),              \
-                                       (unsigned long)(n),              \
-                                       sizeof(*(ptr))))
+#include <asm/arm32/cmpxchg.h>
 
 #define local_irq_disable() asm volatile ( "cpsid i @ local_irq_disable\n" : : : "cc" )
 #define local_irq_enable() asm volatile ( "cpsie i @ local_irq_enable\n" : : : "cc" )
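[Illustrative aside, not part of the patch: the ldrex/strex loops above --
and the ldxr/stxr loops below -- implement the same contract as a C11
strong compare-exchange. A rough portable equivalent of the 4-byte
__cmpxchg_mb(), for comparison only; cas_u32 is an invented name:]

#include <stdatomic.h>
#include <stdint.h>

/* Full-barrier compare-and-swap returning the previous value, mirroring
 * what the assembly hands back in 'oldval'. */
static inline uint32_t cas_u32(_Atomic uint32_t *ptr, uint32_t old,
                               uint32_t new)
{
        uint32_t expected = old;

        /* On failure, 'expected' is rewritten with the value actually
         * seen in memory, so the caller can compare it against 'old'. */
        atomic_compare_exchange_strong_explicit(ptr, &expected, new,
                                                memory_order_seq_cst,
                                                memory_order_seq_cst);
        return expected;
}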
diff --git a/xen/include/asm-arm/arm64/cmpxchg.h b/xen/include/asm-arm/arm64/cmpxchg.h
new file mode 100644
index 0000000..4e930ce
--- /dev/null
+++ b/xen/include/asm-arm/arm64/cmpxchg.h
@@ -0,0 +1,167 @@
+#ifndef __ASM_ARM64_CMPXCHG_H
+#define __ASM_ARM64_CMPXCHG_H
+
+extern void __bad_xchg(volatile void *, int);
+
+static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size)
+{
+        unsigned long ret, tmp;
+
+        switch (size) {
+        case 1:
+                asm volatile("// __xchg1\n"
+                "1:     ldxrb   %w0, %2\n"
+                "       stlxrb  %w1, %w3, %2\n"
+                "       cbnz    %w1, 1b\n"
+                        : "=&r" (ret), "=&r" (tmp), "+Q" (*(u8 *)ptr)
+                        : "r" (x)
+                        : "memory");
+                break;
+        case 2:
+                asm volatile("// __xchg2\n"
+                "1:     ldxrh   %w0, %2\n"
+                "       stlxrh  %w1, %w3, %2\n"
+                "       cbnz    %w1, 1b\n"
+                        : "=&r" (ret), "=&r" (tmp), "+Q" (*(u16 *)ptr)
+                        : "r" (x)
+                        : "memory");
+                break;
+        case 4:
+                asm volatile("// __xchg4\n"
+                "1:     ldxr    %w0, %2\n"
+                "       stlxr   %w1, %w3, %2\n"
+                "       cbnz    %w1, 1b\n"
+                        : "=&r" (ret), "=&r" (tmp), "+Q" (*(u32 *)ptr)
+                        : "r" (x)
+                        : "memory");
+                break;
+        case 8:
+                asm volatile("// __xchg8\n"
+                "1:     ldxr    %0, %2\n"
+                "       stlxr   %w1, %3, %2\n"
+                "       cbnz    %w1, 1b\n"
+                        : "=&r" (ret), "=&r" (tmp), "+Q" (*(u64 *)ptr)
+                        : "r" (x)
+                        : "memory");
+                break;
+        default:
+                __bad_xchg(ptr, size), ret = 0;
+                break;
+        }
+
+        smp_mb();
+        return ret;
+}
+
+#define xchg(ptr,x) \
+        ((__typeof__(*(ptr)))__xchg((unsigned long)(x),(ptr),sizeof(*(ptr))))
+
+extern void __bad_cmpxchg(volatile void *ptr, int size);
+
+static inline unsigned long __cmpxchg(volatile void *ptr, unsigned long old,
+                                      unsigned long new, int size)
+{
+        unsigned long oldval = 0, res;
+
+        switch (size) {
+        case 1:
+                do {
+                        asm volatile("// __cmpxchg1\n"
+                        "       ldxrb   %w1, %2\n"
+                        "       mov     %w0, #0\n"
+                        "       cmp     %w1, %w3\n"
+                        "       b.ne    1f\n"
+                        "       stxrb   %w0, %w4, %2\n"
+                        "1:\n"
+                                : "=&r" (res), "=&r" (oldval), "+Q" (*(u8 *)ptr)
+                                : "Ir" (old), "r" (new)
+                                : "cc");
+                } while (res);
+                break;
+
+        case 2:
+                do {
+                        asm volatile("// __cmpxchg2\n"
+                        "       ldxrh   %w1, %2\n"
+                        "       mov     %w0, #0\n"
+                        "       cmp     %w1, %w3\n"
+                        "       b.ne    1f\n"
+                        "       stxrh   %w0, %w4, %2\n"
+                        "1:\n"
+                                : "=&r" (res), "=&r" (oldval), "+Q" (*(u16 *)ptr)
+                                : "Ir" (old), "r" (new)
+                                : "cc");
+                } while (res);
+                break;
+
+        case 4:
+                do {
+                        asm volatile("// __cmpxchg4\n"
+                        "       ldxr    %w1, %2\n"
+                        "       mov     %w0, #0\n"
+                        "       cmp     %w1, %w3\n"
+                        "       b.ne    1f\n"
+                        "       stxr    %w0, %w4, %2\n"
+                        "1:\n"
+                                : "=&r" (res), "=&r" (oldval), "+Q" (*(u32 *)ptr)
+                                : "Ir" (old), "r" (new)
+                                : "cc");
+                } while (res);
+                break;
+
+        case 8:
+                do {
+                        asm volatile("// __cmpxchg8\n"
+                        "       ldxr    %1, %2\n"
+                        "       mov     %w0, #0\n"
+                        "       cmp     %1, %3\n"
+                        "       b.ne    1f\n"
+                        "       stxr    %w0, %4, %2\n"
+                        "1:\n"
+                                : "=&r" (res), "=&r" (oldval), "+Q" (*(u64 *)ptr)
+                                : "Ir" (old), "r" (new)
+                                : "cc");
+                } while (res);
+                break;
+
+        default:
+                __bad_cmpxchg(ptr, size);
+                oldval = 0;
+        }
+
+        return oldval;
+}
+
+static inline unsigned long __cmpxchg_mb(volatile void *ptr, unsigned long old,
+                                         unsigned long new, int size)
+{
+        unsigned long ret;
+
+        smp_mb();
+        ret = __cmpxchg(ptr, old, new, size);
+        smp_mb();
+
+        return ret;
+}
+
+#define cmpxchg(ptr,o,n)                                                \
+        ((__typeof__(*(ptr)))__cmpxchg_mb((ptr),                        \
+                                          (unsigned long)(o),           \
+                                          (unsigned long)(n),           \
+                                          sizeof(*(ptr))))
+
+#define cmpxchg_local(ptr,o,n)                                          \
+        ((__typeof__(*(ptr)))__cmpxchg((ptr),                           \
+                                       (unsigned long)(o),              \
+                                       (unsigned long)(n),              \
+                                       sizeof(*(ptr))))
+
+#endif
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 8
+ * indent-tabs-mode: t
+ * End:
+ */
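[Illustrative aside, not part of the patch: a classic consumer of the
xchg() macro defined above is a test-and-set lock. A hypothetical sketch;
the tas_lock_t type and tas_lock/tas_unlock names are invented:]

/* 1 = held, 0 = free. xchg() returns the previous value, so reading
 * back 0 means this CPU took the lock. */
typedef struct { volatile unsigned int v; } tas_lock_t;

static inline void tas_lock(tas_lock_t *l)
{
        while (xchg(&l->v, 1) != 0)
                ;                       /* spin until released */
}

static inline void tas_unlock(tas_lock_t *l)
{
        smp_mb();                       /* order critical section first */
        l->v = 0;
}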
diff --git a/xen/include/asm-arm/arm64/system.h b/xen/include/asm-arm/arm64/system.h
index fa50ead..6efced3 100644
--- a/xen/include/asm-arm/arm64/system.h
+++ b/xen/include/asm-arm/arm64/system.h
@@ -2,160 +2,7 @@
 #ifndef __ASM_ARM64_SYSTEM_H
 #define __ASM_ARM64_SYSTEM_H
 
-extern void __bad_xchg(volatile void *, int);
-
-static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size)
-{
-        unsigned long ret, tmp;
-
-        switch (size) {
-        case 1:
-                asm volatile("// __xchg1\n"
-                "1:     ldxrb   %w0, %2\n"
-                "       stlxrb  %w1, %w3, %2\n"
-                "       cbnz    %w1, 1b\n"
-                        : "=&r" (ret), "=&r" (tmp), "+Q" (*(u8 *)ptr)
-                        : "r" (x)
-                        : "memory");
-                break;
-        case 2:
-                asm volatile("// __xchg2\n"
-                "1:     ldxrh   %w0, %2\n"
-                "       stlxrh  %w1, %w3, %2\n"
-                "       cbnz    %w1, 1b\n"
-                        : "=&r" (ret), "=&r" (tmp), "+Q" (*(u16 *)ptr)
-                        : "r" (x)
-                        : "memory");
-                break;
-        case 4:
-                asm volatile("// __xchg4\n"
-                "1:     ldxr    %w0, %2\n"
-                "       stlxr   %w1, %w3, %2\n"
-                "       cbnz    %w1, 1b\n"
-                        : "=&r" (ret), "=&r" (tmp), "+Q" (*(u32 *)ptr)
-                        : "r" (x)
-                        : "memory");
-                break;
-        case 8:
-                asm volatile("// __xchg8\n"
-                "1:     ldxr    %0, %2\n"
-                "       stlxr   %w1, %3, %2\n"
-                "       cbnz    %w1, 1b\n"
-                        : "=&r" (ret), "=&r" (tmp), "+Q" (*(u64 *)ptr)
-                        : "r" (x)
-                        : "memory");
-                break;
-        default:
-                __bad_xchg(ptr, size), ret = 0;
-                break;
-        }
-
-        smp_mb();
-        return ret;
-}
-
-#define xchg(ptr,x) \
-        ((__typeof__(*(ptr)))__xchg((unsigned long)(x),(ptr),sizeof(*(ptr))))
-
-extern void __bad_cmpxchg(volatile void *ptr, int size);
-
-static inline unsigned long __cmpxchg(volatile void *ptr, unsigned long old,
-                                      unsigned long new, int size)
-{
-        unsigned long oldval = 0, res;
-
-        switch (size) {
-        case 1:
-                do {
-                        asm volatile("// __cmpxchg1\n"
-                        "       ldxrb   %w1, %2\n"
-                        "       mov     %w0, #0\n"
-                        "       cmp     %w1, %w3\n"
-                        "       b.ne    1f\n"
-                        "       stxrb   %w0, %w4, %2\n"
-                        "1:\n"
-                                : "=&r" (res), "=&r" (oldval), "+Q" (*(u8 *)ptr)
-                                : "Ir" (old), "r" (new)
-                                : "cc");
-                } while (res);
-                break;
-
-        case 2:
-                do {
-                        asm volatile("// __cmpxchg2\n"
-                        "       ldxrh   %w1, %2\n"
-                        "       mov     %w0, #0\n"
-                        "       cmp     %w1, %w3\n"
-                        "       b.ne    1f\n"
-                        "       stxrh   %w0, %w4, %2\n"
-                        "1:\n"
-                                : "=&r" (res), "=&r" (oldval), "+Q" (*(u16 *)ptr)
-                                : "Ir" (old), "r" (new)
-                                : "cc");
-                } while (res);
-                break;
-
-        case 4:
-                do {
-                        asm volatile("// __cmpxchg4\n"
-                        "       ldxr    %w1, %2\n"
-                        "       mov     %w0, #0\n"
-                        "       cmp     %w1, %w3\n"
-                        "       b.ne    1f\n"
-                        "       stxr    %w0, %w4, %2\n"
-                        "1:\n"
-                                : "=&r" (res), "=&r" (oldval), "+Q" (*(u32 *)ptr)
-                                : "Ir" (old), "r" (new)
-                                : "cc");
-                } while (res);
-                break;
-
-        case 8:
-                do {
-                        asm volatile("// __cmpxchg8\n"
-                        "       ldxr    %1, %2\n"
-                        "       mov     %w0, #0\n"
-                        "       cmp     %1, %3\n"
-                        "       b.ne    1f\n"
-                        "       stxr    %w0, %4, %2\n"
-                        "1:\n"
-                                : "=&r" (res), "=&r" (oldval), "+Q" (*(u64 *)ptr)
-                                : "Ir" (old), "r" (new)
-                                : "cc");
-                } while (res);
-                break;
-
-        default:
-                __bad_cmpxchg(ptr, size);
-                oldval = 0;
-        }
-
-        return oldval;
-}
-
-static inline unsigned long __cmpxchg_mb(volatile void *ptr, unsigned long old,
-                                         unsigned long new, int size)
-{
-        unsigned long ret;
-
-        smp_mb();
-        ret = __cmpxchg(ptr, old, new, size);
-        smp_mb();
-
-        return ret;
-}
-
-#define cmpxchg(ptr,o,n)                                                \
-        ((__typeof__(*(ptr)))__cmpxchg_mb((ptr),                        \
-                                          (unsigned long)(o),           \
-                                          (unsigned long)(n),           \
-                                          sizeof(*(ptr))))
-
-#define cmpxchg_local(ptr,o,n)                                          \
-        ((__typeof__(*(ptr)))__cmpxchg((ptr),                           \
-                                       (unsigned long)(o),              \
-                                       (unsigned long)(n),              \
-                                       sizeof(*(ptr))))
+#include <asm/arm64/cmpxchg.h>
 
 /* Uses uimm4 as a bitmask to select the clearing of one or more of
  * the DAIF exception mask bits: