From patchwork Mon May 14 09:46:31 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Mark Rutland <mark.rutland@arm.com>
X-Patchwork-Id: 135718
From: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org, catalin.marinas@arm.com,
    dave.martin@arm.com, james.morse@arm.com, linux@dominikbrodowski.net,
    linux-fsdevel@vger.kernel.org, marc.zyngier@arm.com,
    mark.rutland@arm.com, viro@zeniv.linux.org.uk, will.deacon@arm.com
Subject: [PATCH 09/18] arm64: convert syscall trace logic to C
Date: Mon, 14 May 2018 10:46:31 +0100
Message-Id: <20180514094640.27569-10-mark.rutland@arm.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20180514094640.27569-1-mark.rutland@arm.com>
References: <20180514094640.27569-1-mark.rutland@arm.com>
Sender: linux-kernel-owner@vger.kernel.org
Precedence: bulk
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

Currently
syscall tracing is a tricky assembly state machine, which can be
rather difficult to follow, and even harder to modify. Before we start
fiddling with it for pt_regs syscalls, let's convert it to C.

This is not intended to have any functional change.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/kernel/entry.S   | 53 ++----------------------------------------
 arch/arm64/kernel/syscall.c | 56 +++++++++++++++++++++++++++++++++++++++++++--
 2 files changed, 56 insertions(+), 53 deletions(-)

-- 
2.11.0

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index d6e057500eaf..5c60369b52fc 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -866,24 +866,6 @@ el0_error_naked:
 	b	ret_to_user
 ENDPROC(el0_error)
 
-
-/*
- * This is the fast syscall return path. We do as little as possible here,
- * and this includes saving x0 back into the kernel stack.
- */
-ret_fast_syscall:
-	disable_daif
-	ldr	x1, [tsk, #TSK_TI_FLAGS]	// re-check for syscall tracing
-	and	x2, x1, #_TIF_SYSCALL_WORK
-	cbnz	x2, ret_fast_syscall_trace
-	and	x2, x1, #_TIF_WORK_MASK
-	cbnz	x2, work_pending
-	enable_step_tsk x1, x2
-	kernel_exit 0
-ret_fast_syscall_trace:
-	enable_daif
-	b	__sys_trace_return_skipped	// we already saved x0
-
 /*
  * Ok, we need to do extra processing, enter the slow path.
  */
@@ -939,44 +921,13 @@ alternative_else_nop_endif
 #endif
 
 el0_svc_naked:					// compat entry point
-	stp	x0, xscno, [sp, #S_ORIG_X0]	// save the original x0 and syscall number
-	enable_daif
-	ct_user_exit 1
-
-	tst	x16, #_TIF_SYSCALL_WORK		// check for syscall hooks
-	b.ne	__sys_trace
 	mov	x0, sp
 	mov	w1, wscno
 	mov	w2, wsc_nr
 	mov	x3, stbl
-	bl	invoke_syscall
-	b	ret_fast_syscall
-ENDPROC(el0_svc)
-
-	/*
-	 * This is the really slow path. We're going to be doing context
-	 * switches, and waiting for our parent to respond.
-	 */
-__sys_trace:
-	cmp	wscno, #NO_SYSCALL		// user-issued syscall(-1)?
-	b.ne	1f
-	mov	x0, #-ENOSYS			// set default errno if so
-	str	x0, [sp, #S_X0]
-1:	mov	x0, sp
-	bl	syscall_trace_enter
-	cmp	w0, #NO_SYSCALL			// skip the syscall?
-	b.eq	__sys_trace_return_skipped
-
-	mov	x0, sp
-	mov	w1, wscno
-	mov	w2, wsc_nr
-	mov	x3, stbl
-	bl	invoke_syscall
-
-__sys_trace_return_skipped:
-	mov	x0, sp
-	bl	syscall_trace_exit
+	bl	el0_svc_common
 	b	ret_to_user
+ENDPROC(el0_svc)
 
 	.popsection				// .entry.text

diff --git a/arch/arm64/kernel/syscall.c b/arch/arm64/kernel/syscall.c
index 58d7569f47df..5df857e32b48 100644
--- a/arch/arm64/kernel/syscall.c
+++ b/arch/arm64/kernel/syscall.c
@@ -1,8 +1,13 @@
 // SPDX-License-Identifier: GPL-2.0
 
+#include <linux/compiler.h>
+#include <linux/context_tracking.h>
 #include <linux/nospec.h>
 #include <linux/ptrace.h>
+#include <asm/daifflags.h>
+#include <asm/thread_info.h>
+
 long do_ni_syscall(struct pt_regs *regs);
 
 typedef long (*syscall_fn_t)(unsigned long, unsigned long,
@@ -16,8 +21,8 @@ static void __invoke_syscall(struct pt_regs *regs, syscall_fn_t syscall_fn)
 			       regs->regs[4], regs->regs[5]);
 }
 
-asmlinkage void invoke_syscall(struct pt_regs *regs, int scno, int sc_nr,
-			       syscall_fn_t syscall_table[])
+static void invoke_syscall(struct pt_regs *regs, int scno, int sc_nr,
+			   syscall_fn_t syscall_table[])
 {
 	if (scno < sc_nr) {
 		syscall_fn_t syscall_fn;
@@ -27,3 +32,50 @@ asmlinkage void invoke_syscall(struct pt_regs *regs, int scno, int sc_nr,
 		regs->regs[0] = do_ni_syscall(regs);
 	}
 }
+
+static inline bool has_syscall_work(unsigned long flags)
+{
+	return unlikely(flags & _TIF_SYSCALL_WORK);
+}
+
+int syscall_trace_enter(struct pt_regs *regs);
+void syscall_trace_exit(struct pt_regs *regs);
+
+asmlinkage void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
+			       syscall_fn_t syscall_table[])
+{
+	unsigned long flags = current_thread_info()->flags;
+
+	regs->orig_x0 = regs->regs[0];
+	regs->syscallno = scno;
+
+	local_daif_restore(DAIF_PROCCTX);
+	user_exit();
+
+	if (has_syscall_work(flags)) {
+		/* set default errno for user-issued syscall(-1) */
+		if (scno == NO_SYSCALL)
+			regs->regs[0] = -ENOSYS;
+		scno = syscall_trace_enter(regs);
+		if (scno == NO_SYSCALL)
+			goto trace_exit;
+	}
+
+	invoke_syscall(regs, scno, sc_nr, syscall_table);
+
+	/*
+	 * The tracing status may have changed under our feet, so we have to
+	 * check again. However, if we were tracing entry, then we always trace
+	 * exit regardless, as the old entry assembly did.
+	 */
+	if (!has_syscall_work(flags)) {
+		local_daif_mask();
+		flags = current_thread_info()->flags;
+		if (!has_syscall_work(flags))
+			return;
+		local_daif_restore(DAIF_PROCCTX);
+	}
+
+trace_exit:
+	syscall_trace_exit(regs);
+}