From patchwork Tue Feb 2 04:06:46 2016
X-Patchwork-Submitter: Hanjun Guo
X-Patchwork-Id: 60991
From: Hanjun Guo
Subject: [PATCH 2/3] arm64: entry: always restore x0 from the stack on syscall return
Date: Tue, 2 Feb 2016 12:06:46 +0800
Message-ID: <1454386007-11860-3-git-send-email-guohanjun@huawei.com>
X-Mailer: git-send-email 1.7.10.msysgit.1
In-Reply-To: <1454386007-11860-1-git-send-email-guohanjun@huawei.com>
References: <1454386007-11860-1-git-send-email-guohanjun@huawei.com>
X-Mailing-List: stable@vger.kernel.org

From: Will Deacon

commit 412fcb6cebd758d080cacd5a41a0cbc656ea5fce upstream.

We have a micro-optimisation on the fast syscall return path where we
take care to keep x0 live with the return value from the syscall so
that we can avoid restoring it from the stack. The benefit of doing
this is fairly suspect, since we will be restoring x1 from the stack
anyway (which lives adjacent in the pt_regs structure) and the only
additional cost is saving x0 back to pt_regs after the syscall handler,
which could be seen as a poor man's prefetch.

More importantly, this causes issues with the context tracking code.

The ct_user_enter macro ends up branching into C code, which is free to
use x0 as a scratch register and consequently leads to us returning
junk back to userspace as the syscall return value.

Rather than special case the context-tracking code, this patch removes
the questionable optimisation entirely.
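(Illustrative aside, not part of the upstream commit message: the
"adjacent in the pt_regs structure" point can be seen from an abridged
sketch of the arm64 exception frame. The real definition lives in
arch/arm64/include/asm/ptrace.h and wraps these fields in struct
user_pt_regs; the struct name below is hypothetical.)

	/*
	 * Abridged sketch of the arm64 exception frame: x0..x30 are
	 * stored as one contiguous array, so regs[0] (x0) and regs[1]
	 * (x1) sit in adjacent 8-byte slots.
	 */
	struct pt_regs_sketch {
		unsigned long long regs[31];	/* x0 .. x30 */
		unsigned long long sp;
		unsigned long long pc;
		unsigned long long pstate;
	};

Because the two slots are contiguous, the single
"ldp x0, x1, [sp, #16 * 0]" in kernel_exit reloads both registers with
one load-pair, which is why always restoring x0 from the stack costs
essentially nothing compared with the old keep-x0-live path.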
Cc: Larry Bassel
Cc: Kevin Hilman
Reviewed-by: Catalin Marinas
Reported-by: Hanjun Guo
Tested-by: Hanjun Guo
Signed-off-by: Will Deacon
Signed-off-by: Hanjun Guo
---
 arch/arm64/kernel/entry.S | 17 ++++++-----------
 1 file changed, 6 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 6657a09..3236b3e 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -116,7 +116,7 @@
 	 */
 	.endm
 
-	.macro	kernel_exit, el, ret = 0
+	.macro	kernel_exit, el
 	ldp	x21, x22, [sp, #S_PC]		// load ELR, SPSR
 	.if	\el == 0
 	ct_user_enter
@@ -143,11 +143,7 @@
 	.endif
 	msr	elr_el1, x21			// set up the return data
 	msr	spsr_el1, x22
-	.if	\ret
-	ldr	x1, [sp, #S_X1]			// preserve x0 (syscall return)
-	.else
 	ldp	x0, x1, [sp, #16 * 0]
-	.endif
 	ldp	x2, x3, [sp, #16 * 1]
 	ldp	x4, x5, [sp, #16 * 2]
 	ldp	x6, x7, [sp, #16 * 3]
@@ -609,22 +605,21 @@ ENDPROC(cpu_switch_to)
  */
 ret_fast_syscall:
 	disable_irq				// disable interrupts
+	str	x0, [sp, #S_X0]			// returned x0
 	ldr	x1, [tsk, #TI_FLAGS]		// re-check for syscall tracing
 	and	x2, x1, #_TIF_SYSCALL_WORK
 	cbnz	x2, ret_fast_syscall_trace
 	and	x2, x1, #_TIF_WORK_MASK
-	cbnz	x2, fast_work_pending
+	cbnz	x2, work_pending
 	enable_step_tsk x1, x2
-	kernel_exit 0, ret = 1
+	kernel_exit 0
 ret_fast_syscall_trace:
 	enable_irq				// enable interrupts
-	b	__sys_trace_return
+	b	__sys_trace_return_skipped	// we already saved x0
 
 /*
  * Ok, we need to do extra processing, enter the slow path.
  */
-fast_work_pending:
-	str	x0, [sp, #S_X0]			// returned x0
 work_pending:
 	tbnz	x1, #TIF_NEED_RESCHED, work_resched
 	/* TIF_SIGPENDING, TIF_NOTIFY_RESUME or TIF_FOREIGN_FPSTATE case */
@@ -648,7 +643,7 @@ ret_to_user:
 	cbnz	x2, work_pending
 	enable_step_tsk x1, x2
 no_work_pending:
-	kernel_exit 0, ret = 0
+	kernel_exit 0
 ENDPROC(ret_to_user)
 
 /*
-- 
1.9.1
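(Hypothetical smoke test, not part of this patch: a minimal userspace
loop that would flag corrupted syscall return values on a kernel hit by
this bug, assuming context tracking is active, e.g. CONFIG_NO_HZ_FULL
with nohz_full= set. The test and the choice of getpid() are
illustrative only.)

	#define _GNU_SOURCE
	#include <stdio.h>
	#include <unistd.h>
	#include <sys/syscall.h>

	int main(void)
	{
		/*
		 * getpid() cannot fail and its value is stable for the
		 * life of the process, so any mismatch below means the
		 * syscall return path handed back a clobbered x0.
		 */
		long expected = syscall(SYS_getpid);

		for (int i = 0; i < 1000000; i++) {
			long got = syscall(SYS_getpid);
			if (got != expected) {
				printf("bogus syscall return: %ld != %ld\n",
				       got, expected);
				return 1;
			}
		}
		printf("no corruption observed\n");
		return 0;
	}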