From patchwork Mon Feb 10 19:20:27 2020
X-Patchwork-Submitter: Adhemerval Zanella
X-Patchwork-Id: 183253
From: Adhemerval Zanella <adhemerval.zanella@linaro.org>
To: libc-alpha@sourceware.org
Subject: [PATCH 04/15] alpha: Refactor syscall and Use Linux kABI for syscall return
Date: Mon, 10 Feb 2020 16:20:27 -0300
Message-Id: <20200210192038.23588-4-adhemerval.zanella@linaro.org>
In-Reply-To: <20200210192038.23588-1-adhemerval.zanella@linaro.org>
References: <20200210192038.23588-1-adhemerval.zanella@linaro.org>

It is highly unlikely that alpha will be ported to anything other than Linux, so this patch moves the generic Unix syscall definitions to Linux and adapts them to the Linux kernel ABI.
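As context for the change described in the next paragraph, here is a minimal, purely illustrative sketch (not part of the patch; the wrapper functions old_style_dup/new_style_dup and the choice of the dup syscall are hypothetical, and the snippet assumes glibc-internal context for __set_errno) contrasting how a caller checks for errors under the old $19-based convention and under the Linux kABI convention this patch adopts:

/* Illustrative sketch only, not part of the patch.  */

/* Old convention: the kernel flags failure in register $19 (a3).  The
   macros exposed that flag through a separate 'err' variable declared
   with INTERNAL_SYSCALL_DECL, and on failure the return value held the
   positive errno code.  */
static int
old_style_dup (int fd)
{
  INTERNAL_SYSCALL_DECL (err);
  long int r = INTERNAL_SYSCALL (dup, err, 1, fd);
  if (INTERNAL_SYSCALL_ERROR_P (r, err))             /* err != 0?  */
    {
      __set_errno (INTERNAL_SYSCALL_ERRNO (r, err)); /* errno = r  */
      return -1;
    }
  return r;
}

/* New convention (Linux kABI): failure is encoded as -errno in the
   return value itself, as on the other Linux ports.  The 'err'
   argument is ignored and INTERNAL_SYSCALL_DECL expands to a no-op,
   so callers can eventually drop it.  */
static int
new_style_dup (int fd)
{
  INTERNAL_SYSCALL_DECL (err);                        /* now a no-op  */
  long int r = INTERNAL_SYSCALL (dup, err, 1, fd);
  if (INTERNAL_SYSCALL_ERROR_P (r, err))             /* r in [-4095, -1]?  */
    {
      __set_errno (INTERNAL_SYSCALL_ERRNO (r, err)); /* errno = -r  */
      return -1;
    }
  return r;
}

The macro definitions that make the second form work are in the sysdeps/unix/sysv/linux/alpha/sysdep.h hunk below.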
It changes the internal_syscall* macros to return a negative value instead of '$19' register value on 'err' macro argument. The macro INTERNAL_SYSCALL_DECL is no longer required, and the INTERNAL_SYSCALL_ERROR_P follows the other Linux kABIS. Checked on alpha-linux-gnu. --- sysdeps/unix/alpha/sysdep.h | 382 ------------------------- sysdeps/unix/sysv/linux/alpha/ioperm.c | 7 +- sysdeps/unix/sysv/linux/alpha/sysdep.h | 354 ++++++++++++++++++++++- 3 files changed, 348 insertions(+), 395 deletions(-) delete mode 100644 sysdeps/unix/alpha/sysdep.h -- 2.17.1 diff --git a/sysdeps/unix/alpha/sysdep.h b/sysdeps/unix/alpha/sysdep.h deleted file mode 100644 index 74db6b02b2..0000000000 --- a/sysdeps/unix/alpha/sysdep.h +++ /dev/null @@ -1,382 +0,0 @@ -/* Copyright (C) 1992-2020 Free Software Foundation, Inc. - This file is part of the GNU C Library. - Contributed by Brendan Kehoe (brendan@zen.org). - - The GNU C Library is free software; you can redistribute it and/or - modify it under the terms of the GNU Lesser General Public - License as published by the Free Software Foundation; either - version 2.1 of the License, or (at your option) any later version. - - The GNU C Library is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY; without even the implied warranty of - MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - Lesser General Public License for more details. - - You should have received a copy of the GNU Lesser General Public - License along with the GNU C Library. If not, see - . */ - -#include -#include /* Defines RTLD_PRIVATE_ERRNO. */ - -#ifdef __ASSEMBLER__ - -#ifdef __linux__ -# include -#else -# include -#endif - -#define __LABEL(x) x##: - -#define LEAF(name, framesize) \ - .globl name; \ - .align 4; \ - .ent name, 0; \ - __LABEL(name) \ - .frame sp, framesize, ra - -#define ENTRY(name) \ - .globl name; \ - .align 4; \ - .ent name, 0; \ - __LABEL(name) \ - .frame sp, 0, ra - -/* Mark the end of function SYM. */ -#undef END -#define END(sym) .end sym - -#ifdef PROF -# define PSEUDO_PROF \ - .set noat; \ - lda AT, _mcount; \ - jsr AT, (AT), _mcount; \ - .set at -#else -# define PSEUDO_PROF -#endif - -#ifdef PROF -# define PSEUDO_PROLOGUE \ - .frame sp, 0, ra; \ - ldgp gp,0(pv); \ - PSEUDO_PROF; \ - .prologue 1 -#elif defined PIC -# define PSEUDO_PROLOGUE \ - .frame sp, 0, ra; \ - .prologue 0 -#else -# define PSEUDO_PROLOGUE \ - .frame sp, 0, ra; \ - ldgp gp,0(pv); \ - .prologue 1 -#endif /* PROF */ - -#ifdef PROF -# define USEPV_PROF std -#else -# define USEPV_PROF no -#endif - -#if RTLD_PRIVATE_ERRNO -# define SYSCALL_ERROR_LABEL $syscall_error -# define SYSCALL_ERROR_HANDLER \ -$syscall_error: \ - stl v0, rtld_errno(gp) !gprel; \ - lda v0, -1; \ - ret -# define SYSCALL_ERROR_FALLTHRU -#elif defined(PIC) -# define SYSCALL_ERROR_LABEL __syscall_error !samegp -# define SYSCALL_ERROR_HANDLER -# define SYSCALL_ERROR_FALLTHRU br SYSCALL_ERROR_LABEL -#else -# define SYSCALL_ERROR_LABEL $syscall_error -# define SYSCALL_ERROR_HANDLER \ -$syscall_error: \ - jmp $31, __syscall_error -# define SYSCALL_ERROR_FALLTHRU -#endif /* RTLD_PRIVATE_ERRNO */ - -/* Overridden by specific syscalls. */ -#undef PSEUDO_PREPARE_ARGS -#define PSEUDO_PREPARE_ARGS /* Nothing. 
*/ - -#define PSEUDO(name, syscall_name, args) \ - .globl name; \ - .align 4; \ - .ent name,0; \ -__LABEL(name) \ - PSEUDO_PROLOGUE; \ - PSEUDO_PREPARE_ARGS \ - lda v0, SYS_ify(syscall_name); \ - call_pal PAL_callsys; \ - bne a3, SYSCALL_ERROR_LABEL - -#undef PSEUDO_END -#define PSEUDO_END(sym) \ - SYSCALL_ERROR_HANDLER; \ - END(sym) - -#define PSEUDO_NOERRNO(name, syscall_name, args) \ - .globl name; \ - .align 4; \ - .ent name,0; \ -__LABEL(name) \ - PSEUDO_PROLOGUE; \ - PSEUDO_PREPARE_ARGS \ - lda v0, SYS_ify(syscall_name); \ - call_pal PAL_callsys; - -#undef PSEUDO_END_NOERRNO -#define PSEUDO_END_NOERRNO(sym) END(sym) - -#define ret_NOERRNO ret - -#define PSEUDO_ERRVAL(name, syscall_name, args) \ - .globl name; \ - .align 4; \ - .ent name,0; \ -__LABEL(name) \ - PSEUDO_PROLOGUE; \ - PSEUDO_PREPARE_ARGS \ - lda v0, SYS_ify(syscall_name); \ - call_pal PAL_callsys; - -#undef PSEUDO_END_ERRVAL -#define PSEUDO_END_ERRVAL(sym) END(sym) - -#define ret_ERRVAL ret - -#define r0 v0 -#define r1 a4 - -#define MOVE(x,y) mov x,y - -#else /* !ASSEMBLER */ - -/* In order to get __set_errno() definition in INLINE_SYSCALL. */ -#include - -/* ??? Linux needs to be able to override INLINE_SYSCALL for one - particular special case. Make this easy. */ - -#undef INLINE_SYSCALL -#define INLINE_SYSCALL(name, nr, args...) \ - INLINE_SYSCALL1(name, nr, args) - -#define INLINE_SYSCALL1(name, nr, args...) \ -({ \ - long _sc_ret, _sc_err; \ - inline_syscall##nr(__NR_##name, args); \ - if (__builtin_expect (_sc_err, 0)) \ - { \ - __set_errno (_sc_ret); \ - _sc_ret = -1L; \ - } \ - _sc_ret; \ -}) - -#define INTERNAL_SYSCALL(name, err_out, nr, args...) \ - INTERNAL_SYSCALL1(name, err_out, nr, args) - -#define INTERNAL_SYSCALL1(name, err_out, nr, args...) \ - INTERNAL_SYSCALL_NCS(__NR_##name, err_out, nr, args) - -#define INTERNAL_SYSCALL_NCS(name, err_out, nr, args...) \ -({ \ - long _sc_ret, _sc_err; \ - inline_syscall##nr(name, args); \ - err_out = _sc_err; \ - _sc_ret; \ -}) - -#define INTERNAL_SYSCALL_DECL(err) \ - long int err __attribute__((unused)) - -/* The normal Alpha calling convention sign-extends 32-bit quantties - no matter what the "real" sign of the 32-bit type. We want to - preserve that when filling in values for the kernel. */ -#define syscall_promote(arg) \ - (sizeof (arg) == 4 ? (long)(int)(long)(arg) : (long)(arg)) - -/* Make sure and "use" the variable that we're not returning, - in order to suppress unused variable warnings. */ -#define INTERNAL_SYSCALL_ERROR_P(val, err) ((void)val, err) -#define INTERNAL_SYSCALL_ERRNO(val, err) ((void)err, val) - -#define inline_syscall_clobbers \ - "$1", "$2", "$3", "$4", "$5", "$6", "$7", "$8", \ - "$22", "$23", "$24", "$25", "$27", "$28", "memory" - -/* It is moderately important optimization-wise to limit the lifetime - of the hard-register variables as much as possible. Thus we copy - in/out as close to the asm as possible. */ - -#define inline_syscall0(name, args...) 
\ -{ \ - register long _sc_19 __asm__("$19"); \ - register long _sc_0 = name; \ - __asm__ __volatile__ \ - ("callsys # %0 %1 <= %2" \ - : "+v"(_sc_0), "=r"(_sc_19) \ - : : inline_syscall_clobbers, \ - "$16", "$17", "$18", "$20", "$21"); \ - _sc_ret = _sc_0, _sc_err = _sc_19; \ -} - -#define inline_syscall1(name,arg1) \ -{ \ - register long _tmp_16 = syscall_promote (arg1); \ - register long _sc_0 = name; \ - register long _sc_16 __asm__("$16") = _tmp_16; \ - register long _sc_19 __asm__("$19"); \ - __asm__ __volatile__ \ - ("callsys # %0 %1 <= %2 %3" \ - : "+v"(_sc_0), "=r"(_sc_19), "+r"(_sc_16) \ - : : inline_syscall_clobbers, \ - "$17", "$18", "$20", "$21"); \ - _sc_ret = _sc_0, _sc_err = _sc_19; \ -} - -#define inline_syscall2(name,arg1,arg2) \ -{ \ - register long _tmp_16 = syscall_promote (arg1); \ - register long _tmp_17 = syscall_promote (arg2); \ - register long _sc_0 = name; \ - register long _sc_16 __asm__("$16") = _tmp_16; \ - register long _sc_17 __asm__("$17") = _tmp_17; \ - register long _sc_19 __asm__("$19"); \ - __asm__ __volatile__ \ - ("callsys # %0 %1 <= %2 %3 %4" \ - : "+v"(_sc_0), "=r"(_sc_19), \ - "+r"(_sc_16), "+r"(_sc_17) \ - : : inline_syscall_clobbers, \ - "$18", "$20", "$21"); \ - _sc_ret = _sc_0, _sc_err = _sc_19; \ -} - -#define inline_syscall3(name,arg1,arg2,arg3) \ -{ \ - register long _tmp_16 = syscall_promote (arg1); \ - register long _tmp_17 = syscall_promote (arg2); \ - register long _tmp_18 = syscall_promote (arg3); \ - register long _sc_0 = name; \ - register long _sc_16 __asm__("$16") = _tmp_16; \ - register long _sc_17 __asm__("$17") = _tmp_17; \ - register long _sc_18 __asm__("$18") = _tmp_18; \ - register long _sc_19 __asm__("$19"); \ - __asm__ __volatile__ \ - ("callsys # %0 %1 <= %2 %3 %4 %5" \ - : "+v"(_sc_0), "=r"(_sc_19), "+r"(_sc_16), \ - "+r"(_sc_17), "+r"(_sc_18) \ - : : inline_syscall_clobbers, "$20", "$21"); \ - _sc_ret = _sc_0, _sc_err = _sc_19; \ -} - -#define inline_syscall4(name,arg1,arg2,arg3,arg4) \ -{ \ - register long _tmp_16 = syscall_promote (arg1); \ - register long _tmp_17 = syscall_promote (arg2); \ - register long _tmp_18 = syscall_promote (arg3); \ - register long _tmp_19 = syscall_promote (arg4); \ - register long _sc_0 = name; \ - register long _sc_16 __asm__("$16") = _tmp_16; \ - register long _sc_17 __asm__("$17") = _tmp_17; \ - register long _sc_18 __asm__("$18") = _tmp_18; \ - register long _sc_19 __asm__("$19") = _tmp_19; \ - __asm__ __volatile__ \ - ("callsys # %0 %1 <= %2 %3 %4 %5 %6" \ - : "+v"(_sc_0), "+r"(_sc_19), "+r"(_sc_16), \ - "+r"(_sc_17), "+r"(_sc_18) \ - : : inline_syscall_clobbers, "$20", "$21"); \ - _sc_ret = _sc_0, _sc_err = _sc_19; \ -} - -#define inline_syscall5(name,arg1,arg2,arg3,arg4,arg5) \ -{ \ - register long _tmp_16 = syscall_promote (arg1); \ - register long _tmp_17 = syscall_promote (arg2); \ - register long _tmp_18 = syscall_promote (arg3); \ - register long _tmp_19 = syscall_promote (arg4); \ - register long _tmp_20 = syscall_promote (arg5); \ - register long _sc_0 = name; \ - register long _sc_16 __asm__("$16") = _tmp_16; \ - register long _sc_17 __asm__("$17") = _tmp_17; \ - register long _sc_18 __asm__("$18") = _tmp_18; \ - register long _sc_19 __asm__("$19") = _tmp_19; \ - register long _sc_20 __asm__("$20") = _tmp_20; \ - __asm__ __volatile__ \ - ("callsys # %0 %1 <= %2 %3 %4 %5 %6 %7" \ - : "+v"(_sc_0), "+r"(_sc_19), "+r"(_sc_16), \ - "+r"(_sc_17), "+r"(_sc_18), "+r"(_sc_20) \ - : : inline_syscall_clobbers, "$21"); \ - _sc_ret = _sc_0, _sc_err = _sc_19; \ -} - -#define 
inline_syscall6(name,arg1,arg2,arg3,arg4,arg5,arg6) \ -{ \ - register long _tmp_16 = syscall_promote (arg1); \ - register long _tmp_17 = syscall_promote (arg2); \ - register long _tmp_18 = syscall_promote (arg3); \ - register long _tmp_19 = syscall_promote (arg4); \ - register long _tmp_20 = syscall_promote (arg5); \ - register long _tmp_21 = syscall_promote (arg6); \ - register long _sc_0 = name; \ - register long _sc_16 __asm__("$16") = _tmp_16; \ - register long _sc_17 __asm__("$17") = _tmp_17; \ - register long _sc_18 __asm__("$18") = _tmp_18; \ - register long _sc_19 __asm__("$19") = _tmp_19; \ - register long _sc_20 __asm__("$20") = _tmp_20; \ - register long _sc_21 __asm__("$21") = _tmp_21; \ - __asm__ __volatile__ \ - ("callsys # %0 %1 <= %2 %3 %4 %5 %6 %7 %8" \ - : "+v"(_sc_0), "+r"(_sc_19), "+r"(_sc_16), \ - "+r"(_sc_17), "+r"(_sc_18), "+r"(_sc_20), \ - "+r"(_sc_21) \ - : : inline_syscall_clobbers); \ - _sc_ret = _sc_0, _sc_err = _sc_19; \ -} -#endif /* ASSEMBLER */ - -/* Pointer mangling support. Note that tls access is slow enough that - we don't deoptimize things by placing the pointer check value there. */ - -#ifdef __ASSEMBLER__ -# if IS_IN (rtld) -# define PTR_MANGLE(dst, src, tmp) \ - ldah tmp, __pointer_chk_guard_local($29) !gprelhigh; \ - ldq tmp, __pointer_chk_guard_local(tmp) !gprellow; \ - xor src, tmp, dst -# define PTR_MANGLE2(dst, src, tmp) \ - xor src, tmp, dst -# elif defined SHARED -# define PTR_MANGLE(dst, src, tmp) \ - ldq tmp, __pointer_chk_guard; \ - xor src, tmp, dst -# else -# define PTR_MANGLE(dst, src, tmp) \ - ldq tmp, __pointer_chk_guard_local; \ - xor src, tmp, dst -# endif -# define PTR_MANGLE2(dst, src, tmp) \ - xor src, tmp, dst -# define PTR_DEMANGLE(dst, tmp) PTR_MANGLE(dst, dst, tmp) -# define PTR_DEMANGLE2(dst, tmp) PTR_MANGLE2(dst, dst, tmp) -#else -# include -# if (IS_IN (rtld) \ - || (!defined SHARED && (IS_IN (libc) \ - || IS_IN (libpthread)))) -extern uintptr_t __pointer_chk_guard_local attribute_relro attribute_hidden; -# define PTR_MANGLE(var) \ - (var) = (__typeof (var)) ((uintptr_t) (var) ^ __pointer_chk_guard_local) -# else -extern uintptr_t __pointer_chk_guard attribute_relro; -# define PTR_MANGLE(var) \ - (var) = (__typeof(var)) ((uintptr_t) (var) ^ __pointer_chk_guard) -# endif -# define PTR_DEMANGLE(var) PTR_MANGLE(var) -#endif /* ASSEMBLER */ diff --git a/sysdeps/unix/sysv/linux/alpha/ioperm.c b/sysdeps/unix/sysv/linux/alpha/ioperm.c index 086c782b9f..cf775674b4 100644 --- a/sysdeps/unix/sysv/linux/alpha/ioperm.c +++ b/sysdeps/unix/sysv/linux/alpha/ioperm.c @@ -196,12 +196,7 @@ stl_mb(unsigned int val, unsigned long addr) static inline void __sethae(unsigned long value) { - register unsigned long r16 __asm__("$16") = value; - register unsigned long r0 __asm__("$0") = __NR_sethae; - __asm__ __volatile__ ("callsys" - : "=r"(r0) - : "0"(r0), "r" (r16) - : inline_syscall_clobbers, "$19"); + INLINE_SYSCALL_CALL (sethae, value); } extern long __pciconfig_iobase(enum __pciconfig_iobase_which __which, diff --git a/sysdeps/unix/sysv/linux/alpha/sysdep.h b/sysdeps/unix/sysv/linux/alpha/sysdep.h index f8c9e589ec..ca0b4e475c 100644 --- a/sysdeps/unix/sysv/linux/alpha/sysdep.h +++ b/sysdeps/unix/sysv/linux/alpha/sysdep.h @@ -19,14 +19,10 @@ #ifndef _LINUX_ALPHA_SYSDEP_H #define _LINUX_ALPHA_SYSDEP_H 1 -#ifdef __ASSEMBLER__ -#include -#include -#endif - /* There is some commonality. */ #include -#include +#include +#include /* Defines RTLD_PRIVATE_ERRNO. 
*/ #include @@ -39,4 +35,348 @@ #define SINGLE_THREAD_BY_GLOBAL 1 -#endif /* _LINUX_ALPHA_SYSDEP_H */ +#ifdef __ASSEMBLER__ +#include +#include + +#define __LABEL(x) x##: + +#define LEAF(name, framesize) \ + .globl name; \ + .align 4; \ + .ent name, 0; \ + __LABEL(name) \ + .frame sp, framesize, ra + +#define ENTRY(name) \ + .globl name; \ + .align 4; \ + .ent name, 0; \ + __LABEL(name) \ + .frame sp, 0, ra + +/* Mark the end of function SYM. */ +#undef END +#define END(sym) .end sym + +#ifdef PROF +# define PSEUDO_PROF \ + .set noat; \ + lda AT, _mcount; \ + jsr AT, (AT), _mcount; \ + .set at +#else +# define PSEUDO_PROF +#endif + +#ifdef PROF +# define PSEUDO_PROLOGUE \ + .frame sp, 0, ra; \ + ldgp gp,0(pv); \ + PSEUDO_PROF; \ + .prologue 1 +#elif defined PIC +# define PSEUDO_PROLOGUE \ + .frame sp, 0, ra; \ + .prologue 0 +#else +# define PSEUDO_PROLOGUE \ + .frame sp, 0, ra; \ + ldgp gp,0(pv); \ + .prologue 1 +#endif /* PROF */ + +#ifdef PROF +# define USEPV_PROF std +#else +# define USEPV_PROF no +#endif + +#if RTLD_PRIVATE_ERRNO +# define SYSCALL_ERROR_LABEL $syscall_error +# define SYSCALL_ERROR_HANDLER \ +$syscall_error: \ + stl v0, rtld_errno(gp) !gprel; \ + lda v0, -1; \ + ret +# define SYSCALL_ERROR_FALLTHRU +#elif defined(PIC) +# define SYSCALL_ERROR_LABEL __syscall_error !samegp +# define SYSCALL_ERROR_HANDLER +# define SYSCALL_ERROR_FALLTHRU br SYSCALL_ERROR_LABEL +#else +# define SYSCALL_ERROR_LABEL $syscall_error +# define SYSCALL_ERROR_HANDLER \ +$syscall_error: \ + jmp $31, __syscall_error +# define SYSCALL_ERROR_FALLTHRU +#endif /* RTLD_PRIVATE_ERRNO */ + +/* Overridden by specific syscalls. */ +#undef PSEUDO_PREPARE_ARGS +#define PSEUDO_PREPARE_ARGS /* Nothing. */ + +#define PSEUDO(name, syscall_name, args) \ + .globl name; \ + .align 4; \ + .ent name,0; \ +__LABEL(name) \ + PSEUDO_PROLOGUE; \ + PSEUDO_PREPARE_ARGS \ + lda v0, SYS_ify(syscall_name); \ + call_pal PAL_callsys; \ + bne a3, SYSCALL_ERROR_LABEL + +#undef PSEUDO_END +#define PSEUDO_END(sym) \ + SYSCALL_ERROR_HANDLER; \ + END(sym) + +#define PSEUDO_NOERRNO(name, syscall_name, args) \ + .globl name; \ + .align 4; \ + .ent name,0; \ +__LABEL(name) \ + PSEUDO_PROLOGUE; \ + PSEUDO_PREPARE_ARGS \ + lda v0, SYS_ify(syscall_name); \ + call_pal PAL_callsys; + +#undef PSEUDO_END_NOERRNO +#define PSEUDO_END_NOERRNO(sym) END(sym) + +#define ret_NOERRNO ret + +#define PSEUDO_ERRVAL(name, syscall_name, args) \ + .globl name; \ + .align 4; \ + .ent name,0; \ +__LABEL(name) \ + PSEUDO_PROLOGUE; \ + PSEUDO_PREPARE_ARGS \ + lda v0, SYS_ify(syscall_name); \ + call_pal PAL_callsys; + +#undef PSEUDO_END_ERRVAL +#define PSEUDO_END_ERRVAL(sym) END(sym) + +#define ret_ERRVAL ret + +#define r0 v0 +#define r1 a4 + +#define MOVE(x,y) mov x,y + +#else /* !ASSEMBLER */ + +/* In order to get __set_errno() definition in INLINE_SYSCALL. */ +#include + +#undef INLINE_SYSCALL +#define INLINE_SYSCALL(name, nr, args...) \ +({ \ + INTERNAL_SYSCALL_DECL (_sc_err); \ + long int _sc_ret = INTERNAL_SYSCALL (name, sc_err, nr, args); \ + if (INTERNAL_SYSCALL_ERROR_P (_sc_ret, _sc_err)) \ + { \ + __set_errno (INTERNAL_SYSCALL_ERRNO (_sc_ret, _sc_err)); \ + _sc_ret = -1L; \ + } \ + _sc_ret; \ +}) + +#define INTERNAL_SYSCALL(name, err_out, nr, args...) \ + internal_syscall##nr(__NR_##name, args) + +#define INTERNAL_SYSCALL_NCS(name, err_out, nr, args...) 
\ + internal_syscall##nr(name, args) + +#define INTERNAL_SYSCALL_DECL(err) do { } while (0) + +/* The normal Alpha calling convention sign-extends 32-bit quantties + no matter what the "real" sign of the 32-bit type. We want to + preserve that when filling in values for the kernel. */ +#define syscall_promote(arg) \ + (sizeof (arg) == 4 ? (long)(int)(long)(arg) : (long)(arg)) + +/* Make sure and "use" the variable that we're not returning, + in order to suppress unused variable warnings. */ +#define INTERNAL_SYSCALL_ERROR_P(val, err) \ + ((unsigned long) (val) >= (unsigned long) -4095) +#define INTERNAL_SYSCALL_ERRNO(val, err) (-(val)) + +#define internal_syscall_clobbers \ + "$1", "$2", "$3", "$4", "$5", "$6", "$7", "$8", \ + "$22", "$23", "$24", "$25", "$27", "$28", "memory" + +/* It is moderately important optimization-wise to limit the lifetime + of the hard-register variables as much as possible. Thus we copy + in/out as close to the asm as possible. */ + +#define internal_syscall0(name, args...) \ +({ \ + register long _sc_19 __asm__("$19"); \ + register long _sc_0 = name; \ + __asm__ __volatile__ \ + ("callsys # %0 %1 <= %2" \ + : "+v"(_sc_0), "=r"(_sc_19) \ + : : internal_syscall_clobbers, \ + "$16", "$17", "$18", "$20", "$21"); \ + _sc_19 != 0 ? -_sc_0 : _sc_0; \ +}) + +#define internal_syscall1(name,arg1) \ +({ \ + register long _tmp_16 = syscall_promote (arg1); \ + register long _sc_0 = name; \ + register long _sc_16 __asm__("$16") = _tmp_16; \ + register long _sc_19 __asm__("$19"); \ + __asm__ __volatile__ \ + ("callsys # %0 %1 <= %2 %3" \ + : "+v"(_sc_0), "=r"(_sc_19), "+r"(_sc_16) \ + : : internal_syscall_clobbers, \ + "$17", "$18", "$20", "$21"); \ + _sc_19 != 0 ? -_sc_0 : _sc_0; \ +}) + +#define internal_syscall2(name,arg1,arg2) \ +({ \ + register long _tmp_16 = syscall_promote (arg1); \ + register long _tmp_17 = syscall_promote (arg2); \ + register long _sc_0 = name; \ + register long _sc_16 __asm__("$16") = _tmp_16; \ + register long _sc_17 __asm__("$17") = _tmp_17; \ + register long _sc_19 __asm__("$19"); \ + __asm__ __volatile__ \ + ("callsys # %0 %1 <= %2 %3 %4" \ + : "+v"(_sc_0), "=r"(_sc_19), \ + "+r"(_sc_16), "+r"(_sc_17) \ + : : internal_syscall_clobbers, \ + "$18", "$20", "$21"); \ + _sc_19 != 0 ? -_sc_0 : _sc_0; \ +}) + +#define internal_syscall3(name,arg1,arg2,arg3) \ +({ \ + register long _tmp_16 = syscall_promote (arg1); \ + register long _tmp_17 = syscall_promote (arg2); \ + register long _tmp_18 = syscall_promote (arg3); \ + register long _sc_0 = name; \ + register long _sc_16 __asm__("$16") = _tmp_16; \ + register long _sc_17 __asm__("$17") = _tmp_17; \ + register long _sc_18 __asm__("$18") = _tmp_18; \ + register long _sc_19 __asm__("$19"); \ + __asm__ __volatile__ \ + ("callsys # %0 %1 <= %2 %3 %4 %5" \ + : "+v"(_sc_0), "=r"(_sc_19), "+r"(_sc_16), \ + "+r"(_sc_17), "+r"(_sc_18) \ + : : internal_syscall_clobbers, "$20", "$21"); \ + _sc_19 != 0 ? 
-_sc_0 : _sc_0; \ +}) + +#define internal_syscall4(name,arg1,arg2,arg3,arg4) \ +({ \ + register long _tmp_16 = syscall_promote (arg1); \ + register long _tmp_17 = syscall_promote (arg2); \ + register long _tmp_18 = syscall_promote (arg3); \ + register long _tmp_19 = syscall_promote (arg4); \ + register long _sc_0 = name; \ + register long _sc_16 __asm__("$16") = _tmp_16; \ + register long _sc_17 __asm__("$17") = _tmp_17; \ + register long _sc_18 __asm__("$18") = _tmp_18; \ + register long _sc_19 __asm__("$19") = _tmp_19; \ + __asm__ __volatile__ \ + ("callsys # %0 %1 <= %2 %3 %4 %5 %6" \ + : "+v"(_sc_0), "+r"(_sc_19), "+r"(_sc_16), \ + "+r"(_sc_17), "+r"(_sc_18) \ + : : internal_syscall_clobbers, "$20", "$21"); \ + _sc_19 != 0 ? -_sc_0 : _sc_0; \ +}) + +#define internal_syscall5(name,arg1,arg2,arg3,arg4,arg5) \ +({ \ + register long _tmp_16 = syscall_promote (arg1); \ + register long _tmp_17 = syscall_promote (arg2); \ + register long _tmp_18 = syscall_promote (arg3); \ + register long _tmp_19 = syscall_promote (arg4); \ + register long _tmp_20 = syscall_promote (arg5); \ + register long _sc_0 = name; \ + register long _sc_16 __asm__("$16") = _tmp_16; \ + register long _sc_17 __asm__("$17") = _tmp_17; \ + register long _sc_18 __asm__("$18") = _tmp_18; \ + register long _sc_19 __asm__("$19") = _tmp_19; \ + register long _sc_20 __asm__("$20") = _tmp_20; \ + __asm__ __volatile__ \ + ("callsys # %0 %1 <= %2 %3 %4 %5 %6 %7" \ + : "+v"(_sc_0), "+r"(_sc_19), "+r"(_sc_16), \ + "+r"(_sc_17), "+r"(_sc_18), "+r"(_sc_20) \ + : : internal_syscall_clobbers, "$21"); \ + _sc_19 != 0 ? -_sc_0 : _sc_0; \ +}) + +#define internal_syscall6(name,arg1,arg2,arg3,arg4,arg5,arg6) \ +({ \ + register long _tmp_16 = syscall_promote (arg1); \ + register long _tmp_17 = syscall_promote (arg2); \ + register long _tmp_18 = syscall_promote (arg3); \ + register long _tmp_19 = syscall_promote (arg4); \ + register long _tmp_20 = syscall_promote (arg5); \ + register long _tmp_21 = syscall_promote (arg6); \ + register long _sc_0 = name; \ + register long _sc_16 __asm__("$16") = _tmp_16; \ + register long _sc_17 __asm__("$17") = _tmp_17; \ + register long _sc_18 __asm__("$18") = _tmp_18; \ + register long _sc_19 __asm__("$19") = _tmp_19; \ + register long _sc_20 __asm__("$20") = _tmp_20; \ + register long _sc_21 __asm__("$21") = _tmp_21; \ + __asm__ __volatile__ \ + ("callsys # %0 %1 <= %2 %3 %4 %5 %6 %7 %8" \ + : "+v"(_sc_0), "+r"(_sc_19), "+r"(_sc_16), \ + "+r"(_sc_17), "+r"(_sc_18), "+r"(_sc_20), \ + "+r"(_sc_21) \ + : : internal_syscall_clobbers); \ + _sc_19 != 0 ? -_sc_0 : _sc_0; \ +}) +#endif /* ASSEMBLER */ + +/* Pointer mangling support. Note that tls access is slow enough that + we don't deoptimize things by placing the pointer check value there. 
*/ + +#ifdef __ASSEMBLER__ +# if IS_IN (rtld) +# define PTR_MANGLE(dst, src, tmp) \ + ldah tmp, __pointer_chk_guard_local($29) !gprelhigh; \ + ldq tmp, __pointer_chk_guard_local(tmp) !gprellow; \ + xor src, tmp, dst +# define PTR_MANGLE2(dst, src, tmp) \ + xor src, tmp, dst +# elif defined SHARED +# define PTR_MANGLE(dst, src, tmp) \ + ldq tmp, __pointer_chk_guard; \ + xor src, tmp, dst +# else +# define PTR_MANGLE(dst, src, tmp) \ + ldq tmp, __pointer_chk_guard_local; \ + xor src, tmp, dst +# endif +# define PTR_MANGLE2(dst, src, tmp) \ + xor src, tmp, dst +# define PTR_DEMANGLE(dst, tmp) PTR_MANGLE(dst, dst, tmp) +# define PTR_DEMANGLE2(dst, tmp) PTR_MANGLE2(dst, dst, tmp) +#else +# include +# if (IS_IN (rtld) \ + || (!defined SHARED && (IS_IN (libc) \ + || IS_IN (libpthread)))) +extern uintptr_t __pointer_chk_guard_local attribute_relro attribute_hidden; +# define PTR_MANGLE(var) \ + (var) = (__typeof (var)) ((uintptr_t) (var) ^ __pointer_chk_guard_local) +# else +extern uintptr_t __pointer_chk_guard attribute_relro; +# define PTR_MANGLE(var) \ + (var) = (__typeof(var)) ((uintptr_t) (var) ^ __pointer_chk_guard) +# endif +# define PTR_DEMANGLE(var) PTR_MANGLE(var) +#endif /* ASSEMBLER */ + +#endif /* _LINUX_ALPHA_SYSDEP_H */
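As a closing aside, the following small standalone program (not part of the patch; written here only for illustration and assuming an LP64 host) shows what the syscall_promote macro carried over into the new sysdep.h does: Alpha keeps 32-bit values sign-extended in 64-bit registers regardless of their C signedness, so 32-bit syscall arguments are widened through (int) before being handed to the kernel.

#include <stdio.h>

/* Copied from the sysdep.h above: widen 32-bit arguments with sign
   extension, matching the Alpha register convention.  */
#define syscall_promote(arg) \
  (sizeof (arg) == 4 ? (long)(int)(long)(arg) : (long)(arg))

int
main (void)
{
  unsigned int u = 0x80000000u;          /* high bit set */
  long plain    = (long) u;              /* zero-extended: 0x80000000 */
  long promoted = syscall_promote (u);   /* sign-extended: 0xffffffff80000000 */
  printf ("plain=%#lx promoted=%#lx\n", plain, promoted);
  return 0;
}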