From patchwork Mon Sep 3 12:49:23 2012
X-Patchwork-Submitter: Dave Martin
X-Patchwork-Id: 11151
From: Dave Martin <dave.martin@linaro.org>
To: patches@arm.linux.org.uk
Cc: patches@linaro.org, Dave Martin <dave.martin@linaro.org>,
 Stefano Stabellini, Marc Zyngier
Subject: ARM: opcodes: Make opcode byteswapping macros assembly-compatible
Date: Mon, 3 Sep 2012 13:49:23 +0100
Message-Id: <1346676569-4085-2-git-send-email-dave.martin@linaro.org>
X-Mailer: git-send-email 1.7.4.1
In-Reply-To: <1346676569-4085-1-git-send-email-dave.martin@linaro.org>
References: <1346676569-4085-1-git-send-email-dave.martin@linaro.org>

Most of the existing macros don't work with assembler, due to the
use of type casts and C functions from <linux/swab.h>.

This patch abstracts out those operations and provides simple
explicit versions for use in assembly code.

__opcode_is_thumb32() and __opcode_is_thumb16() are also converted
to do bitmask-based testing, to avoid confusion if these are used in
assembly code (the assembler typically treats all arithmetic values
as signed).

These changes avoid the need for the compiler to pre-evaluate
constant expressions used to generate opcodes.  By ensuring that
the forms of these expressions can be evaluated directly by the
assembler, we can just stringify the expressions directly into the
asm during the preprocessing pass.  The alternative approach
(passing the evaluated expression via an inline asm "i" constraint)
gets painful because the contents of the asm and the constraints
must be kept in sync, which makes the resulting macros awkward to
use.

Retaining the C forms of the macros allows more efficient code to
be generated when opcodes are generated programmatically at
run-time, but there is no way to embed run-time-generated opcodes
in asm() blocks.

Signed-off-by: Dave Martin <dave.martin@linaro.org>
Acked-by: Nicolas Pitre
---
KernelVersion: 3.6-rc4
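Illustration (not part of the patch): the stringification approach
described above can be sketched in stand-alone C.  The __stringify()
helpers below are local stand-ins for the kernel's
<linux/stringify.h>, and the opcode is just an example (ARM
"mov r0, r0", the traditional NOP); the point is that the asm string
ends up containing a pure integer expression that the assembler can
evaluate by itself.

#include <stdio.h>

/* Same shape as the patch's ___asm_opcode_swab32(): plain integer
 * arithmetic, with no casts and no C function calls. */
#define ___asm_opcode_swab32(x) ( \
	(((x) << 24) & 0xFF000000) \
	| (((x) << 8) & 0x00FF0000) \
	| (((x) >> 8) & 0x0000FF00) \
	| (((x) >> 24) & 0x000000FF) \
)

/* Local stand-ins for the kernel's __stringify() helpers */
#define __stringify_1(x) #x
#define __stringify(x) __stringify_1(x)

int main(void)
{
	/* The literal text an asm() block would hand to the assembler: */
	puts(".inst " __stringify(___asm_opcode_swab32(0xe1a00000)));

	/* The same expression evaluated by the C compiler (0x0000a0e1): */
	printf("0x%08x\n", (unsigned int)___asm_opcode_swab32(0xe1a00000u));

	return 0;
}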
diff --git a/arch/arm/include/asm/opcodes.h b/arch/arm/include/asm/opcodes.h
index 6bf54f9..f57e417 100644
--- a/arch/arm/include/asm/opcodes.h
+++ b/arch/arm/include/asm/opcodes.h
@@ -19,6 +19,33 @@ extern asmlinkage unsigned int arm_check_condition(u32 opcode, u32 psr);
 
 /*
+ * Assembler opcode byteswap helpers.
+ * These are only intended for use by this header: don't use them directly,
+ * because they will be suboptimal in most cases.
+ */
+#define ___asm_opcode_swab32(x) ( \
+	(((x) << 24) & 0xFF000000) \
+	| (((x) << 8) & 0x00FF0000) \
+	| (((x) >> 8) & 0x0000FF00) \
+	| (((x) >> 24) & 0x000000FF) \
+)
+#define ___asm_opcode_swab16(x) ( \
+	(((x) << 8) & 0xFF00) \
+	| (((x) >> 8) & 0x00FF) \
+)
+#define ___asm_opcode_swahb32(x) ( \
+	(((x) << 8) & 0xFF00FF00) \
+	| (((x) >> 8) & 0x00FF00FF) \
+)
+#define ___asm_opcode_swahw32(x) ( \
+	(((x) << 16) & 0xFFFF0000) \
+	| (((x) >> 16) & 0x0000FFFF) \
+)
+#define ___asm_opcode_identity32(x) ((x) & 0xFFFFFFFF)
+#define ___asm_opcode_identity16(x) ((x) & 0xFFFF)
+
+
+/*
  * Opcode byteswap helpers
  *
  * These macros help with converting instructions between a canonical integer
 *
@@ -41,30 +68,58 @@ extern asmlinkage unsigned int arm_check_condition(u32 opcode, u32 psr);
  * Note that values in the range 0x0000E800..0xE7FFFFFF intentionally do not
  * represent any valid Thumb-2 instruction.  For this range,
  * __opcode_is_thumb32() and __opcode_is_thumb16() will both be false.
+ *
+ * The ___asm variants are intended only for use by this header, in situations
+ * involving inline assembler.  For .S files, the normal __opcode_*() macros
+ * should do the right thing.
  */
+#ifdef __ASSEMBLY__
 
-#ifndef __ASSEMBLY__
+#define ___opcode_swab32(x) ___asm_opcode_swab32(x)
+#define ___opcode_swab16(x) ___asm_opcode_swab16(x)
+#define ___opcode_swahb32(x) ___asm_opcode_swahb32(x)
+#define ___opcode_swahw32(x) ___asm_opcode_swahw32(x)
+#define ___opcode_identity32(x) ___asm_opcode_identity32(x)
+#define ___opcode_identity16(x) ___asm_opcode_identity16(x)
+
+#else /* ! __ASSEMBLY__ */
 
 #include <linux/types.h>
 #include <linux/swab.h>
 
+#define ___opcode_swab32(x) swab32(x)
+#define ___opcode_swab16(x) swab16(x)
+#define ___opcode_swahb32(x) swahb32(x)
+#define ___opcode_swahw32(x) swahw32(x)
+#define ___opcode_identity32(x) ((u32)(x))
+#define ___opcode_identity16(x) ((u16)(x))
+
+#endif /* ! __ASSEMBLY__ */
+
+
 #ifdef CONFIG_CPU_ENDIAN_BE8
-#define __opcode_to_mem_arm(x) swab32(x)
-#define __opcode_to_mem_thumb16(x) swab16(x)
-#define __opcode_to_mem_thumb32(x) swahb32(x)
+#define __opcode_to_mem_arm(x) ___opcode_swab32(x)
+#define __opcode_to_mem_thumb16(x) ___opcode_swab16(x)
+#define __opcode_to_mem_thumb32(x) ___opcode_swahb32(x)
+#define ___asm_opcode_to_mem_arm(x) ___asm_opcode_swab32(x)
+#define ___asm_opcode_to_mem_thumb16(x) ___asm_opcode_swab16(x)
+#define ___asm_opcode_to_mem_thumb32(x) ___asm_opcode_swahb32(x)
 
 #else /* ! CONFIG_CPU_ENDIAN_BE8 */
-#define __opcode_to_mem_arm(x) ((u32)(x))
-#define __opcode_to_mem_thumb16(x) ((u16)(x))
+#define __opcode_to_mem_arm(x) ___opcode_identity32(x)
+#define __opcode_to_mem_thumb16(x) ___opcode_identity16(x)
+#define ___asm_opcode_to_mem_arm(x) ___asm_opcode_identity32(x)
+#define ___asm_opcode_to_mem_thumb16(x) ___asm_opcode_identity16(x)
 #ifndef CONFIG_CPU_ENDIAN_BE32
 /*
  * On BE32 systems, using 32-bit accesses to store Thumb instructions will not
  * work in all cases, due to alignment constraints.  For now, a correct
  * version is not provided for BE32.
  */
-#define __opcode_to_mem_thumb32(x) swahw32(x)
+#define __opcode_to_mem_thumb32(x) ___opcode_swahw32(x)
+#define ___asm_opcode_to_mem_thumb32(x) ___asm_opcode_swahw32(x)
 #endif
 
 #endif /* ! CONFIG_CPU_ENDIAN_BE8 */
@@ -78,15 +133,27 @@ extern asmlinkage unsigned int arm_check_condition(u32 opcode, u32 psr);
 
 /* Operations specific to Thumb opcodes */
 
 /* Instruction size checks: */
-#define __opcode_is_thumb32(x) ((u32)(x) >= 0xE8000000UL)
-#define __opcode_is_thumb16(x) ((u32)(x) < 0xE800UL)
+#define __opcode_is_thumb32(x) ( \
+	((x) & 0xF8000000) == 0xE8000000 \
+	|| ((x) & 0xF0000000) == 0xF0000000 \
+)
+#define __opcode_is_thumb16(x) ( \
+	((x) & 0xFFFF0000) == 0 \
+	&& !(((x) & 0xF800) == 0xE800 || ((x) & 0xF000) == 0xF000) \
+)
 
 /* Operations to construct or split 32-bit Thumb instructions: */
-#define __opcode_thumb32_first(x) ((u16)((x) >> 16))
-#define __opcode_thumb32_second(x) ((u16)(x))
-#define __opcode_thumb32_compose(first, second) \
-	(((u32)(u16)(first) << 16) | (u32)(u16)(second))
-
-#endif /* __ASSEMBLY__ */
+#define __opcode_thumb32_first(x) (___opcode_identity16((x) >> 16))
+#define __opcode_thumb32_second(x) (___opcode_identity16(x))
+#define __opcode_thumb32_compose(first, second) ( \
+	(___opcode_identity32(___opcode_identity16(first)) << 16) \
+	| ___opcode_identity32(___opcode_identity16(second)) \
+)
+#define ___asm_opcode_thumb32_first(x) (___asm_opcode_identity16((x) >> 16))
+#define ___asm_opcode_thumb32_second(x) (___asm_opcode_identity16(x))
+#define ___asm_opcode_thumb32_compose(first, second) ( \
+	(___asm_opcode_identity32(___asm_opcode_identity16(first)) << 16) \
+	| ___asm_opcode_identity32(___asm_opcode_identity16(second)) \
+)
 
 #endif /* __ASM_ARM_OPCODES_H */
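Illustration (not part of the patch): the new bitmask-based size
checks can be exercised with a throwaway host-side test such as the
one below.  The example encodings are the 16-bit Thumb NOP (0xBF00),
the 32-bit Thumb-2 NOP.W (0xF3AF8000), and 0x0000E800, a value from
the intentionally-invalid range described in the header comment, for
which both predicates should be false.

#include <stdio.h>

/* Copies of the predicates as added by this patch */
#define __opcode_is_thumb32(x) ( \
	((x) & 0xF8000000) == 0xE8000000 \
	|| ((x) & 0xF0000000) == 0xF0000000 \
)
#define __opcode_is_thumb16(x) ( \
	((x) & 0xFFFF0000) == 0 \
	&& !(((x) & 0xF800) == 0xE800 || ((x) & 0xF000) == 0xF000) \
)

int main(void)
{
	/* 16-bit Thumb NOP: expect is16=1 is32=0 */
	printf("0xBF00:     is16=%d is32=%d\n",
	       __opcode_is_thumb16(0xBF00u), __opcode_is_thumb32(0xBF00u));

	/* 32-bit Thumb-2 NOP.W: expect is16=0 is32=1 */
	printf("0xF3AF8000: is16=%d is32=%d\n",
	       __opcode_is_thumb16(0xF3AF8000u),
	       __opcode_is_thumb32(0xF3AF8000u));

	/* Hole in the encoding space: expect is16=0 is32=0 */
	printf("0x0000E800: is16=%d is32=%d\n",
	       __opcode_is_thumb16(0x0000E800u),
	       __opcode_is_thumb32(0x0000E800u));

	return 0;
}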