From patchwork Wed Dec 17 06:22:45 2014
X-Patchwork-Submitter: Sheng Yong
X-Patchwork-Id: 42369
From: Sheng Yong
Subject:
[PATCH 02/16] ARM: atomic64: fix endian-ness in atomic.h
Date: Wed, 17 Dec 2014 06:22:45 +0000
Message-ID: <1418797379-107848-3-git-send-email-shengyong1@huawei.com>
X-Mailer: git-send-email 1.8.3.4
In-Reply-To: <1418797379-107848-1-git-send-email-shengyong1@huawei.com>
References: <1418797379-107848-1-git-send-email-shengyong1@huawei.com>
X-Mailing-List: stable@vger.kernel.org

From: Victor Kamensky

commit 2245f92498b216b50e744423bde17626287409d8 upstream

Fix the inline asm for the atomic64_xxx functions in the ARM atomic.h. Instead of the %H operand specifier, the code should use %Q for the least significant part of the value and %R for the most significant part. %H always returns the higher of the two register numbers and is therefore not endian neutral: on a big-endian kernel it selects the register holding the least significant half, which breaks the carry chains in the arithmetic below. %H should be used only with the ldrexd and strexd instructions, which take a consecutive register pair regardless of endianness.
Signed-off-by: Victor Kamensky
Acked-by: Will Deacon
Signed-off-by: Ben Dooks
---
 arch/arm/include/asm/atomic.h | 26 +++++++++++++-------------
 1 file changed, 13 insertions(+), 13 deletions(-)

diff --git a/arch/arm/include/asm/atomic.h b/arch/arm/include/asm/atomic.h
index da1c77d..6447a0b 100644
--- a/arch/arm/include/asm/atomic.h
+++ b/arch/arm/include/asm/atomic.h
@@ -301,8 +301,8 @@ static inline void atomic64_add(u64 i, atomic64_t *v)
 
 	__asm__ __volatile__("@ atomic64_add\n"
 "1:	ldrexd	%0, %H0, [%3]\n"
-"	adds	%0, %0, %4\n"
-"	adc	%H0, %H0, %H4\n"
+"	adds	%Q0, %Q0, %Q4\n"
+"	adc	%R0, %R0, %R4\n"
 "	strexd	%1, %0, %H0, [%3]\n"
 "	teq	%1, #0\n"
 "	bne	1b"
@@ -320,8 +320,8 @@ static inline u64 atomic64_add_return(u64 i, atomic64_t *v)
 
 	__asm__ __volatile__("@ atomic64_add_return\n"
 "1:	ldrexd	%0, %H0, [%3]\n"
-"	adds	%0, %0, %4\n"
-"	adc	%H0, %H0, %H4\n"
+"	adds	%Q0, %Q0, %Q4\n"
+"	adc	%R0, %R0, %R4\n"
 "	strexd	%1, %0, %H0, [%3]\n"
 "	teq	%1, #0\n"
 "	bne	1b"
@@ -341,8 +341,8 @@ static inline void atomic64_sub(u64 i, atomic64_t *v)
 
 	__asm__ __volatile__("@ atomic64_sub\n"
 "1:	ldrexd	%0, %H0, [%3]\n"
-"	subs	%0, %0, %4\n"
-"	sbc	%H0, %H0, %H4\n"
+"	subs	%Q0, %Q0, %Q4\n"
+"	sbc	%R0, %R0, %R4\n"
 "	strexd	%1, %0, %H0, [%3]\n"
 "	teq	%1, #0\n"
 "	bne	1b"
@@ -360,8 +360,8 @@ static inline u64 atomic64_sub_return(u64 i, atomic64_t *v)
 
 	__asm__ __volatile__("@ atomic64_sub_return\n"
 "1:	ldrexd	%0, %H0, [%3]\n"
-"	subs	%0, %0, %4\n"
-"	sbc	%H0, %H0, %H4\n"
+"	subs	%Q0, %Q0, %Q4\n"
+"	sbc	%R0, %R0, %R4\n"
 "	strexd	%1, %0, %H0, [%3]\n"
 "	teq	%1, #0\n"
 "	bne	1b"
@@ -428,9 +428,9 @@ static inline u64 atomic64_dec_if_positive(atomic64_t *v)
 
 	__asm__ __volatile__("@ atomic64_dec_if_positive\n"
 "1:	ldrexd	%0, %H0, [%3]\n"
-"	subs	%0, %0, #1\n"
-"	sbc	%H0, %H0, #0\n"
-"	teq	%H0, #0\n"
+"	subs	%Q0, %Q0, #1\n"
+"	sbc	%R0, %R0, #0\n"
+"	teq	%R0, #0\n"
 "	bmi	2f\n"
 "	strexd	%1, %0, %H0, [%3]\n"
 "	teq	%1, #0\n"
@@ -459,8 +459,8 @@ static inline int atomic64_add_unless(atomic64_t *v, u64 a, u64 u)
 "	teqeq	%H0, %H5\n"
 "	moveq	%1, #0\n"
 "	beq	2f\n"
-"	adds	%0, %0, %6\n"
-"	adc	%H0, %H0, %H6\n"
+"	adds	%Q0, %Q0, %Q6\n"
+"	adc	%R0, %R0, %R6\n"
 "	strexd	%2, %0, %H0, [%4]\n"
 "	teq	%2, #0\n"
 "	bne	1b\n"