From patchwork Mon Oct 27 07:59:41 2014
X-Patchwork-Submitter: Andrew Pinski
X-Patchwork-Id: 39594
From: Andrew Pinski <apinski@cavium.com>
To: libc-alpha@sourceware.org
Cc: Andrew Pinski <apinski@cavium.com>
Subject: [PATCH 17/29] [AARCH64] Syscalls for ILP32 are always passed via 64-bit values.
Date: Mon, 27 Oct 2014 00:59:41 -0700
Message-Id: <1414396793-9005-18-git-send-email-apinski@cavium.com>
In-Reply-To: <1414396793-9005-1-git-send-email-apinski@cavium.com>
References: <1414396793-9005-1-git-send-email-apinski@cavium.com>

This patch adds support for ILP32 syscalls, sign- and zero-extending
arguments where needed.  Unlike LP64, pointers are 32-bit and need to be
zero extended rather than getting the standard sign extension the code
would otherwise do.  We take advantage of ssize_t being long rather than
int on ILP32 to get this correct.

	* sysdeps/unix/sysv/linux/aarch64/sysdep.h (INLINE_VSYSCALL):
	Use long long instead of long.
	(INTERNAL_VSYSCALL): Likewise.
	(INLINE_SYSCALL): Likewise.
	(INTERNAL_SYSCALL_RAW): Likewise.
	(ARGIFY): New macro.
	(LOAD_ARGS_0): Use long long instead of long.
	(LOAD_ARGS_1): Use long long instead of long and use ARGIFY.
	(LOAD_ARGS_2): Likewise.
	(LOAD_ARGS_3): Likewise.
	(LOAD_ARGS_4): Likewise.
	(LOAD_ARGS_5): Likewise.
	(LOAD_ARGS_6): Likewise.
	(LOAD_ARGS_7): Likewise.
---
 sysdeps/unix/sysv/linux/aarch64/sysdep.h |   52 ++++++++++++++++++-----------
 1 files changed, 32 insertions(+), 20 deletions(-)

diff --git a/sysdeps/unix/sysv/linux/aarch64/sysdep.h b/sysdeps/unix/sysv/linux/aarch64/sysdep.h
index fc31661..0d9fa8a 100644
--- a/sysdeps/unix/sysv/linux/aarch64/sysdep.h
+++ b/sysdeps/unix/sysv/linux/aarch64/sysdep.h
@@ -156,7 +156,7 @@
   ({                                                                    \
     __label__ out;                                                      \
     __label__ iserr;                                                    \
-    long sc_ret;                                                        \
+    long long sc_ret;                                                   \
     INTERNAL_SYSCALL_DECL (sc_err);                                     \
                                                                         \
     if (__vdso_##name != NULL)                                          \
@@ -187,7 +187,7 @@
 # define INTERNAL_VSYSCALL(name, err, nr, args...)                      \
   ({                                                                    \
     __label__ out;                                                      \
-    long v_ret;                                                         \
+    long long v_ret;                                                    \
                                                                         \
     if (__vdso_##name != NULL)                                          \
       {                                                                 \
@@ -224,11 +224,11 @@
    call.  */

 # undef INLINE_SYSCALL
 # define INLINE_SYSCALL(name, nr, args...)                              \
-  ({ unsigned long _sys_result = INTERNAL_SYSCALL (name, , nr, args);   \
+  ({ unsigned long long _sys_result = INTERNAL_SYSCALL (name, , nr, args); \
     if (__builtin_expect (INTERNAL_SYSCALL_ERROR_P (_sys_result, ), 0)) \
       {                                                                 \
        __set_errno (INTERNAL_SYSCALL_ERRNO (_sys_result, ));            \
-       _sys_result = (unsigned long) -1;                                \
+       _sys_result = (unsigned long long) -1;                           \
       }                                                                 \
     (long) _sys_result; })
@@ -237,10 +237,10 @@

 # undef INTERNAL_SYSCALL_RAW
 # define INTERNAL_SYSCALL_RAW(name, err, nr, args...)                   \
-  ({ long _sys_result;                                                  \
+  ({ long long _sys_result;                                             \
     {                                                                   \
       LOAD_ARGS_##nr (args)                                             \
-      register long _x8 asm ("x8") = (name);                            \
+      register long long _x8 asm ("x8") = (name);                       \
       asm volatile ("svc	0	// syscall " # name              \
		     : "=r" (_x0) : "r"(_x8) ASM_ARGS_##nr : "memory");  \
       _sys_result = _x0;                                                \
@@ -262,36 +262,48 @@
 # undef INTERNAL_SYSCALL_ERRNO
 # define INTERNAL_SYSCALL_ERRNO(val, err)	(-(val))

+/* Convert X to a long long, without losing any bits if it is one
+   already or warning if it is a 32-bit pointer.  This zero extends
+   32-bit pointers and sign extends other signed types.  Note this only
+   works because ssize_t is long and short-short is promoted to int.  */
+#define ARGIFY(X) \
+ ((unsigned long long) \
+  __builtin_choose_expr(__builtin_types_compatible_p(__typeof__(X), __typeof__((X) - (X))), \
+			(X), \
+			__builtin_choose_expr(__builtin_types_compatible_p(int, __typeof__((X) - (X))), \
+					      (X), \
+					      (unsigned long)(X))))
+
 # define LOAD_ARGS_0()				\
-  register long _x0 asm ("x0");
+  register long long _x0 asm ("x0");
 # define LOAD_ARGS_1(x0)			\
-  long _x0tmp = (long) (x0);			\
+  long long _x0tmp = ARGIFY (x0);		\
   LOAD_ARGS_0 ()				\
   _x0 = _x0tmp;
 # define LOAD_ARGS_2(x0, x1)			\
-  long _x1tmp = (long) (x1);			\
+  long long _x1tmp = ARGIFY (x1);		\
   LOAD_ARGS_1 (x0)				\
-  register long _x1 asm ("x1") = _x1tmp;
+  register long long _x1 asm ("x1") = _x1tmp;
 # define LOAD_ARGS_3(x0, x1, x2)		\
-  long _x2tmp = (long) (x2);			\
+  long long _x2tmp = ARGIFY (x2);		\
   LOAD_ARGS_2 (x0, x1)				\
-  register long _x2 asm ("x2") = _x2tmp;
+  register long long _x2 asm ("x2") = _x2tmp;
 # define LOAD_ARGS_4(x0, x1, x2, x3)		\
-  long _x3tmp = (long) (x3);			\
+  long long _x3tmp = ARGIFY (x3);		\
   LOAD_ARGS_3 (x0, x1, x2)			\
-  register long _x3 asm ("x3") = _x3tmp;
+  register long long _x3 asm ("x3") = _x3tmp;
 # define LOAD_ARGS_5(x0, x1, x2, x3, x4)	\
-  long _x4tmp = (long) (x4);			\
+  long long _x4tmp = ARGIFY (x4);		\
   LOAD_ARGS_4 (x0, x1, x2, x3)			\
-  register long _x4 asm ("x4") = _x4tmp;
+  register long long _x4 asm ("x4") = _x4tmp;
 # define LOAD_ARGS_6(x0, x1, x2, x3, x4, x5)	\
-  long _x5tmp = (long) (x5);			\
+  long long _x5tmp = ARGIFY (x5);		\
   LOAD_ARGS_5 (x0, x1, x2, x3, x4)		\
-  register long _x5 asm ("x5") = _x5tmp;
+  register long long _x5 asm ("x5") = _x5tmp;
 # define LOAD_ARGS_7(x0, x1, x2, x3, x4, x5, x6)\
-  long _x6tmp = (long) (x6);			\
+  long long _x6tmp = ARGIFY (x6);		\
   LOAD_ARGS_6 (x0, x1, x2, x3, x4, x5)		\
-  register long _x6 asm ("x6") = _x6tmp;
+  register long long _x6 asm ("x6") = _x6tmp;

 # define ASM_ARGS_0
 # define ASM_ARGS_1	, "r" (_x0)
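
For reference, and not part of the patch: a minimal standalone sketch of
how the ARGIFY dispatch behaves.  The macro body is copied verbatim from
the hunk above; the harness around it (variable names, test values, the
printout) is illustrative only and assumes a GCC-compatible compiler.
Building for an ILP32 target (e.g. aarch64 with -mabi=ilp32) exercises
the 32-bit pointer case; on an LP64 host the pointer branch degenerates
to a plain 64-bit cast.

/* argify-demo.c: illustration of the ARGIFY type dispatch (hypothetical
   test program, not from the patch).  Requires GCC-style builtins.  */
#include <stdio.h>

#define ARGIFY(X) \
 ((unsigned long long) \
  __builtin_choose_expr(__builtin_types_compatible_p(__typeof__(X), __typeof__((X) - (X))), \
			(X), \
			__builtin_choose_expr(__builtin_types_compatible_p(int, __typeof__((X) - (X))), \
					      (X), \
					      (unsigned long)(X))))

int
main (void)
{
  int n = -1;     /* (n - n) is int, same type as n: first branch.  */
  long l = -2;    /* the ssize_t case: (l - l) is long, first branch.  */
  short s = -3;   /* short - short promotes to int: second branch.  */
  char buf[1];
  char *p = buf;  /* (p - p) is ptrdiff_t, never char *: pointer case.  */

  /* Signed integer arguments are sign extended into the full 64-bit
     syscall argument register...  */
  printf ("int   -1: %#llx\n", ARGIFY (n));  /* 0xffffffffffffffff */
  printf ("long  -2: %#llx\n", ARGIFY (l));  /* 0xfffffffffffffffe */
  printf ("short -3: %#llx\n", ARGIFY (s));  /* 0xfffffffffffffffd */

  /* ...while a pointer is zero extended, so on ILP32 the upper 32 bits
     of the register are guaranteed to be zero.  */
  printf ("char *p : %#llx\n", ARGIFY (p));
  return 0;
}

The interesting split on ILP32 is between long (32-bit and signed, so it
must be sign extended) and pointers (32-bit, so they must be zero
extended); __builtin_choose_expr resolves the choice entirely at compile
time, so no run-time dispatch is involved.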