From patchwork Fri Jul 26 16:28:53 2013
X-Patchwork-Submitter: vkamensky
X-Patchwork-Id: 18611
From: Victor Kamensky <victor.kamensky@linaro.org>
To: ben.dooks@codethink.co.uk, linux-arm-kernel@lists.infradead.org, will.deacon@arm.com
Cc: patches@linaro.org, linaro-kernel@lists.linaro.org, Victor Kamensky
Subject: [PATCH 1/1] ARM: atomic64: fix endian-ness in atomic.h
Date: Fri, 26 Jul 2013 09:28:53 -0700
Message-Id: <1374856133-2939-2-git-send-email-victor.kamensky@linaro.org>
In-Reply-To: <1374856133-2939-1-git-send-email-victor.kamensky@linaro.org>
References: <1374856133-2939-1-git-send-email-victor.kamensky@linaro.org>

Fix the inline asm for the atomic64_xxx functions in the ARM atomic.h. Instead of the %H operand modifier, the code should use %Q for the least significant half of the value and %R for the most significant half. %H always selects the higher-numbered register of the pair, and is therefore not endian-neutral; %H should be used only with the ldrexd and strexd instructions.
Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm/include/asm/atomic.h | 26 +++++++++++++-------------
 1 file changed, 13 insertions(+), 13 deletions(-)

diff --git a/arch/arm/include/asm/atomic.h b/arch/arm/include/asm/atomic.h
index da1c77d..6447a0b 100644
--- a/arch/arm/include/asm/atomic.h
+++ b/arch/arm/include/asm/atomic.h
@@ -301,8 +301,8 @@ static inline void atomic64_add(u64 i, atomic64_t *v)
 
 	__asm__ __volatile__("@ atomic64_add\n"
 "1:	ldrexd	%0, %H0, [%3]\n"
-"	adds	%0, %0, %4\n"
-"	adc	%H0, %H0, %H4\n"
+"	adds	%Q0, %Q0, %Q4\n"
+"	adc	%R0, %R0, %R4\n"
 "	strexd	%1, %0, %H0, [%3]\n"
 "	teq	%1, #0\n"
 "	bne	1b"
@@ -320,8 +320,8 @@ static inline u64 atomic64_add_return(u64 i, atomic64_t *v)
 
 	__asm__ __volatile__("@ atomic64_add_return\n"
 "1:	ldrexd	%0, %H0, [%3]\n"
-"	adds	%0, %0, %4\n"
-"	adc	%H0, %H0, %H4\n"
+"	adds	%Q0, %Q0, %Q4\n"
+"	adc	%R0, %R0, %R4\n"
 "	strexd	%1, %0, %H0, [%3]\n"
 "	teq	%1, #0\n"
 "	bne	1b"
@@ -341,8 +341,8 @@ static inline void atomic64_sub(u64 i, atomic64_t *v)
 
 	__asm__ __volatile__("@ atomic64_sub\n"
 "1:	ldrexd	%0, %H0, [%3]\n"
-"	subs	%0, %0, %4\n"
-"	sbc	%H0, %H0, %H4\n"
+"	subs	%Q0, %Q0, %Q4\n"
+"	sbc	%R0, %R0, %R4\n"
 "	strexd	%1, %0, %H0, [%3]\n"
 "	teq	%1, #0\n"
 "	bne	1b"
@@ -360,8 +360,8 @@ static inline u64 atomic64_sub_return(u64 i, atomic64_t *v)
 
 	__asm__ __volatile__("@ atomic64_sub_return\n"
 "1:	ldrexd	%0, %H0, [%3]\n"
-"	subs	%0, %0, %4\n"
-"	sbc	%H0, %H0, %H4\n"
+"	subs	%Q0, %Q0, %Q4\n"
+"	sbc	%R0, %R0, %R4\n"
 "	strexd	%1, %0, %H0, [%3]\n"
 "	teq	%1, #0\n"
 "	bne	1b"
@@ -428,9 +428,9 @@ static inline u64 atomic64_dec_if_positive(atomic64_t *v)
 
 	__asm__ __volatile__("@ atomic64_dec_if_positive\n"
 "1:	ldrexd	%0, %H0, [%3]\n"
-"	subs	%0, %0, #1\n"
-"	sbc	%H0, %H0, #0\n"
-"	teq	%H0, #0\n"
+"	subs	%Q0, %Q0, #1\n"
+"	sbc	%R0, %R0, #0\n"
+"	teq	%R0, #0\n"
 "	bmi	2f\n"
 "	strexd	%1, %0, %H0, [%3]\n"
 "	teq	%1, #0\n"
@@ -459,8 +459,8 @@ static inline int atomic64_add_unless(atomic64_t *v, u64 a, u64 u)
 "	teqeq	%H0, %H5\n"
 "	moveq	%1, #0\n"
 "	beq	2f\n"
-"	adds	%0, %0, %6\n"
-"	adc	%H0, %H0, %H6\n"
+"	adds	%Q0, %Q0, %Q6\n"
+"	adc	%R0, %R0, %R6\n"
 "	strexd	%2, %0, %H0, [%4]\n"
 "	teq	%2, #0\n"
 "	bne	1b\n"