From patchwork Mon Sep  3 12:49:24 2012
X-Patchwork-Submitter: Dave Martin
X-Patchwork-Id: 11152
From: Dave Martin
To: patches@arm.linux.org.uk
Cc: patches@linaro.org, Dave Martin, Stefano Stabellini, Marc Zyngier
Subject: ARM: opcodes: Add helpers for emitting custom opcodes
Date: Mon, 3 Sep 2012 13:49:24 +0100
Message-Id: <1346676569-4085-3-git-send-email-dave.martin@linaro.org>
In-Reply-To: <1346676569-4085-1-git-send-email-dave.martin@linaro.org>
References: <1346676569-4085-1-git-send-email-dave.martin@linaro.org>

This patch adds some __inst_() macros for injecting custom opcodes in
assembler (both inline and in .S files).  They should make it easier and
cleaner to get things right in little-/big-endian/ARM/Thumb-2 kernels
without a lot of #ifdefs.
This pure-preprocessor approach is preferred over the alternative
method of wedging extra assembler directives into the assembler input
using top-level asm() blocks, since there is no way to guarantee that
the compiler won't reorder those with respect to each other or with
respect to non-toplevel asm() blocks, unless -fno-toplevel-reorder is
passed (which is in itself somewhat undesirable because it defeats some
potential optimisations).

Some existing code currently _does_ silently rely on the compiler not
reordering at the top level, but it seems better to avoid adding extra
code which depends on this if the same result can be achieved in
another way.

Signed-off-by: Dave Martin
Acked-by: Nicolas Pitre
---
KernelVersion: 3.6-rc4

diff --git a/arch/arm/include/asm/opcodes.h b/arch/arm/include/asm/opcodes.h
index f57e417..f7937e1 100644
--- a/arch/arm/include/asm/opcodes.h
+++ b/arch/arm/include/asm/opcodes.h
@@ -156,4 +156,73 @@ extern asmlinkage unsigned int arm_check_condition(u32 opcode, u32 psr);
 	| ___asm_opcode_identity32(___asm_opcode_identity16(second)) \
 	)
 
+/*
+ * Opcode injection helpers
+ *
+ * In rare cases it is necessary to assemble an opcode which the
+ * assembler does not support directly, or which would normally be
+ * rejected because of the CFLAGS or AFLAGS used to build the affected
+ * file.
+ *
+ * Before using these macros, consider carefully whether it is feasible
+ * instead to change the build flags for your file, or whether it really
+ * makes sense to support old assembler versions when building that
+ * particular kernel feature.
+ *
+ * The macros defined here should only be used where there is no viable
+ * alternative.
+ *
+ *
+ * __inst_arm(x): emit the specified ARM opcode
+ * __inst_thumb16(x): emit the specified 16-bit Thumb opcode
+ * __inst_thumb32(x): emit the specified 32-bit Thumb opcode
+ *
+ * __inst_arm_thumb16(arm, thumb): emit either the specified arm or
+ *	16-bit Thumb opcode, depending on whether an ARM or Thumb-2
+ *	kernel is being built
+ *
+ * __inst_arm_thumb32(arm, thumb): emit either the specified arm or
+ *	32-bit Thumb opcode, depending on whether an ARM or Thumb-2
+ *	kernel is being built
+ *
+ *
+ * Note that using these macros directly is poor practice.  Instead, you
+ * should use them to define human-readable wrapper macros to encode the
+ * instructions that you care about.  In code which might run on ARMv7 or
+ * above, you can usually use the __inst_arm_thumb{16,32} macros to
+ * specify the ARM and Thumb alternatives at the same time.  This ensures
+ * that the correct opcode gets emitted depending on the instruction set
+ * used for the kernel build.
+ */
+#include <linux/stringify.h>
+
+#define __inst_arm(x) ___inst_arm(___asm_opcode_to_mem_arm(x))
+#define __inst_thumb32(x) ___inst_thumb32(				\
+	___asm_opcode_to_mem_thumb16(___asm_opcode_thumb32_first(x)),	\
+	___asm_opcode_to_mem_thumb16(___asm_opcode_thumb32_second(x))	\
+)
+#define __inst_thumb16(x) ___inst_thumb16(___asm_opcode_to_mem_thumb16(x))
+
+#ifdef CONFIG_THUMB2_KERNEL
+#define __inst_arm_thumb16(arm_opcode, thumb_opcode) \
+	__inst_thumb16(thumb_opcode)
+#define __inst_arm_thumb32(arm_opcode, thumb_opcode) \
+	__inst_thumb32(thumb_opcode)
+#else
+#define __inst_arm_thumb16(arm_opcode, thumb_opcode) __inst_arm(arm_opcode)
+#define __inst_arm_thumb32(arm_opcode, thumb_opcode) __inst_arm(arm_opcode)
+#endif
+
+/* Helpers for the helpers.  Don't use these directly. */
+#ifdef __ASSEMBLY__
+#define ___inst_arm(x) .long x
+#define ___inst_thumb16(x) .short x
+#define ___inst_thumb32(first, second) .short first, second
+#else
+#define ___inst_arm(x) ".long " __stringify(x) "\n\t"
+#define ___inst_thumb16(x) ".short " __stringify(x) "\n\t"
+#define ___inst_thumb32(first, second) \
+	".short " __stringify(first) ", " __stringify(second) "\n\t"
+#endif
+
 #endif /* __ASM_ARM_OPCODES_H */