From patchwork Mon Jul 25 16:26:05 2011
X-Patchwork-Submitter: Dave Martin
X-Patchwork-Id: 3113
From: Dave Martin <dave.martin@linaro.org>
To: linux-arm-kernel@lists.infradead.org
Cc: patches@linaro.org, Nicolas Pitre
Subject: [PATCH] ARM: alignment: Prevent ignoring of faults with ARMv6 unaligned access model
Date: Mon, 25 Jul 2011 17:26:05 +0100
Message-Id: <1311611165-20064-1-git-send-email-dave.martin@linaro.org>
X-Mailer: git-send-email 1.7.4.1

Currently, it's possible to set the kernel to ignore alignment faults
when changing the alignment fault handling mode at runtime via
/proc/cpu/alignment, even though this is undesirable on ARMv6 and
above, where it can result in infinite spins where an un-fixed-up
instruction repeatedly faults.

In addition, the kernel clobbers any alignment mode specified on the
command-line if running on ARMv6 or above.

This patch factors out the necessary safety check into a couple of new
helper functions, and checks and modifies the fault handling mode as
appropriate on boot and on writes to /proc/cpu/alignment.

Prior to ARMv6, the behaviour is unchanged.  For ARMv6 and above, the
behaviour changes as follows:

  * Attempting to ignore faults on ARMv6 results in the mode being
    forced to UM_FIXUP instead.  A warning is printed if this happens
    as a result of a write to /proc/cpu/alignment.  The user's UM_WARN
    bit (if present) is still honoured.

  * An alignment= argument from the kernel command-line is now
    honoured, except that the kernel will modify the specified mode as
    described above.  This allows modes such as UM_SIGNAL and UM_WARN
    to be active immediately from boot, which is useful for debugging
    purposes.

Signed-off-by: Dave Martin <dave.martin@linaro.org>
---
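[Not part of the patch; an illustration for reviewers.]

The stand-alone user-space sketch below models the coercion the new
safe_usermode() helper is intended to perform, assuming a CPU for which
cpu_is_v6_unaligned() would return true.  The UM_* bit values and mode
names are intended to mirror those in arch/arm/mm/alignment.c;
model_safe_usermode() and the table printed by main() are hypothetical
and exist only to show the requested-vs-effective mode mapping, they
are not kernel code.

#include <stdbool.h>
#include <stdio.h>

#define UM_WARN		(1 << 0)
#define UM_FIXUP	(1 << 1)
#define UM_SIGNAL	(1 << 2)

static const char *const usermode_action[] = {
	"ignored", "warn", "fixup", "fixup+warn", "signal", "signal+warn",
};

/*
 * Hypothetical user-space model of safe_usermode(): force fixup when
 * neither UM_FIXUP nor UM_SIGNAL is requested, preserving UM_WARN.
 */
static int model_safe_usermode(int new_usermode, bool warn)
{
	if (!(new_usermode & (UM_FIXUP | UM_SIGNAL))) {
		new_usermode |= UM_FIXUP;
		if (warn)
			fprintf(stderr, "ignoring faults is unsafe on v6+; forcing fixup\n");
	}
	return new_usermode;
}

int main(void)
{
	/* Show what each value written to /proc/cpu/alignment would become. */
	for (int mode = 0; mode <= 5; mode++) {
		int effective = model_safe_usermode(mode, false);

		printf("requested %d (%-11s) -> effective %d (%s)\n",
		       mode, usermode_action[mode],
		       effective, usermode_action[effective]);
	}
	return 0;
}

If this model is right, writing '0' (ignored) or '1' (warn) to
/proc/cpu/alignment on a v6+ CPU should end up as "fixup" or
"fixup+warn" respectively, with any requested UM_WARN bit preserved.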
 arch/arm/mm/alignment.c |   42 ++++++++++++++++++++++++++++++------------
 1 files changed, 30 insertions(+), 12 deletions(-)

diff --git a/arch/arm/mm/alignment.c b/arch/arm/mm/alignment.c
index 724ba3b..a55c6fc 100644
--- a/arch/arm/mm/alignment.c
+++ b/arch/arm/mm/alignment.c
@@ -95,6 +95,33 @@ static const char *usermode_action[] = {
 	"signal+warn"
 };
 
+/* Return true if and only if the ARMv6 unaligned access model is in use. */
+static bool cpu_is_v6_unaligned(void)
+{
+	return cpu_architecture() >= CPU_ARCH_ARMv6 && (cr_alignment & CR_U);
+}
+
+static int safe_usermode(int new_usermode, bool warn)
+{
+	/*
+	 * ARMv6 and later CPUs can perform unaligned accesses for
+	 * most single load and store instructions up to word size.
+	 * LDM, STM, LDRD and STRD still need to be handled.
+	 *
+	 * Ignoring the alignment fault is not an option on these
+	 * CPUs since we spin re-faulting the instruction without
+	 * making any progress.
+	 */
+	if (cpu_is_v6_unaligned() && !(new_usermode & (UM_FIXUP | UM_SIGNAL))) {
+		new_usermode |= UM_FIXUP;
+
+		if (warn)
+			printk(KERN_WARNING "alignment: ignoring faults is unsafe on this CPU. Defaulting to fixup mode.\n");
+	}
+
+	return new_usermode;
+}
+
 static int alignment_proc_show(struct seq_file *m, void *v)
 {
 	seq_printf(m, "User:\t\t%lu\n", ai_user);
@@ -125,7 +152,7 @@ static ssize_t alignment_proc_write(struct file *file, const char __user *buffer
 		if (get_user(mode, buffer))
 			return -EFAULT;
 		if (mode >= '0' && mode <= '5')
-			ai_usermode = mode - '0';
+			ai_usermode = safe_usermode(mode - '0', true);
 	}
 	return count;
 }
@@ -923,20 +950,11 @@ static int __init alignment_init(void)
 		return -ENOMEM;
 #endif
 
-	/*
-	 * ARMv6 and later CPUs can perform unaligned accesses for
-	 * most single load and store instructions up to word size.
-	 * LDM, STM, LDRD and STRD still need to be handled.
-	 *
-	 * Ignoring the alignment fault is not an option on these
-	 * CPUs since we spin re-faulting the instruction without
-	 * making any progress.
-	 */
-	if (cpu_architecture() >= CPU_ARCH_ARMv6 && (cr_alignment & CR_U)) {
+	if (cpu_is_v6_unaligned()) {
 		cr_alignment &= ~CR_A;
 		cr_no_alignment &= ~CR_A;
 		set_cr(cr_alignment);
-		ai_usermode = UM_FIXUP;
+		ai_usermode = safe_usermode(ai_usermode, false);
 	}
 
 	hook_fault_code(1, do_alignment, SIGBUS, BUS_ADRALN,