From patchwork Fri Aug 18 11:26:20 2017
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 110373
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org
Cc: "H. Peter Anvin", Arnd Bergmann, Heiko Carstens, Kees Cook,
    Will Deacon, Michael Ellerman, Thomas Garnier, Thomas Gleixner,
    "Serge E. Hallyn", Bjorn Helgaas, Benjamin Herrenschmidt,
    Paul Mackerras, Catalin Marinas, Petr Mladek, Ingo Molnar,
    James Morris, Andrew Morton, Joe Perches, Nicolas Pitre,
    Steven Rostedt, Martin Schwidefsky, Sergey Senozhatsky,
    Linus Torvalds, Andy Whitcroft, Jessica Yu
Subject: [PATCH v2 2/6] module: use relative references for __ksymtab entries
Date: Fri, 18 Aug 2017 12:26:20 +0100
Message-Id: <20170818112624.24991-3-ard.biesheuvel@linaro.org>
In-Reply-To: <20170818112624.24991-1-ard.biesheuvel@linaro.org>
References: <20170818112624.24991-1-ard.biesheuvel@linaro.org>
X-Mailing-List: linux-kernel@vger.kernel.org

An ordinary arm64 defconfig build has ~64 KB worth of __ksymtab entries,
each consisting of two 64-bit fields containing absolute references: one
to the symbol itself and one to a char array containing its name.

When we build the same configuration with KASLR enabled, we end up with
an additional ~192 KB of relocations in the .init section, i.e., one
24-byte entry for each absolute reference, all of which need to be
processed at boot time.
Given that the struct kernel_symbol that describes each entry is
completely local to module.c (except for the references emitted by
EXPORT_SYMBOL() itself), we can easily modify it to contain two 32-bit
relative references instead. This reduces the size of the __ksymtab
section by 50% for all 64-bit architectures, and gets rid of the
runtime relocations entirely for architectures implementing KASLR,
either via standard PIE linking (arm64) or using custom host tools
(x86).

Note that the binary search involving __ksymtab contents relies on each
section being sorted by symbol name. This is implemented based on the
input section names, not the names in the ksymtab entries, so this
patch does not interfere with that.

Given that the use of place-relative relocations requires support both
in the toolchain and in the module loader, we cannot enable this
feature for all architectures. So make it dependent on whether
CONFIG_HAVE_ARCH_PREL32_RELOCATIONS is defined.

Cc: Jessica Yu
Cc: Arnd Bergmann
Cc: Andrew Morton
Cc: Ingo Molnar
Cc: Kees Cook
Cc: Thomas Garnier
Cc: Nicolas Pitre
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/x86/include/asm/Kbuild   |  1 +
 arch/x86/include/asm/export.h |  4 --
 include/asm-generic/export.h  | 12 +++-
 include/linux/compiler.h      | 11 ++++
 include/linux/export.h        | 68 ++++++++++++++++----
 kernel/module.c               | 14 ++--
 6 files changed, 86 insertions(+), 24 deletions(-)

-- 
2.11.0

diff --git a/arch/x86/include/asm/Kbuild b/arch/x86/include/asm/Kbuild
index 5d6a53fd7521..3e8a88dcaa1d 100644
--- a/arch/x86/include/asm/Kbuild
+++ b/arch/x86/include/asm/Kbuild
@@ -9,5 +9,6 @@ generated-y += xen-hypercalls.h
 generic-y += clkdev.h
 generic-y += dma-contiguous.h
 generic-y += early_ioremap.h
+generic-y += export.h
 generic-y += mcs_spinlock.h
 generic-y += mm-arch-hooks.h
diff --git a/arch/x86/include/asm/export.h b/arch/x86/include/asm/export.h
deleted file mode 100644
index 138de56b13eb..000000000000
--- a/arch/x86/include/asm/export.h
+++ /dev/null
@@ -1,4 +0,0 @@
-#ifdef CONFIG_64BIT
-#define KSYM_ALIGN 16
-#endif
-#include <asm-generic/export.h>
diff --git a/include/asm-generic/export.h b/include/asm-generic/export.h
index 719db1968d81..97ce606459ae 100644
--- a/include/asm-generic/export.h
+++ b/include/asm-generic/export.h
@@ -5,12 +5,10 @@
 #define KSYM_FUNC(x) x
 #endif
 #ifdef CONFIG_64BIT
-#define __put .quad
 #ifndef KSYM_ALIGN
 #define KSYM_ALIGN 8
 #endif
 #else
-#define __put .long
 #ifndef KSYM_ALIGN
 #define KSYM_ALIGN 4
 #endif
@@ -25,6 +23,16 @@
 #define KSYM(name) name
 #endif
 
+.macro __put, val, name
+#ifdef CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
+	.long	\val - ., \name - .
+#elif defined(CONFIG_64BIT)
+	.quad	\val, \name
+#else
+	.long	\val, \name
+#endif
+.endm
+
 /*
  * note on .section use: @progbits vs %progbits nastiness doesn't matter,
  * since we immediately emit into those sections anyway.
diff --git a/include/linux/compiler.h b/include/linux/compiler.h
index eca8ad75e28b..3e0b707664b1 100644
--- a/include/linux/compiler.h
+++ b/include/linux/compiler.h
@@ -590,4 +590,15 @@ static __always_inline void __write_once_size(volatile void *p, void *res, int s
 	(_________p1); \
 })
 
+/*
+ * Force the compiler to emit 'sym' as a symbol, so that we can reference
+ * it from inline assembler. Necessary in case 'sym' could be inlined
+ * otherwise, or eliminated entirely due to lack of references that are
+ * visible to the compiler.
+ */
+#define __ADDRESSABLE(sym)						\
+	static void *__attribute__((section(".discard.text"))) __used	\
+		__PASTE(__discard_##sym, __LINE__)(void)		\
+			{ return (void *)&sym; }			\
+
 #endif /* __LINUX_COMPILER_H */
diff --git a/include/linux/export.h b/include/linux/export.h
index 1a1dfdb2a5c6..896883a44be3 100644
--- a/include/linux/export.h
+++ b/include/linux/export.h
@@ -24,12 +24,6 @@
 #define VMLINUX_SYMBOL_STR(x) __VMLINUX_SYMBOL_STR(x)
 
 #ifndef __ASSEMBLY__
-struct kernel_symbol
-{
-	unsigned long value;
-	const char *name;
-};
-
 #ifdef MODULE
 extern struct module __this_module;
 #define THIS_MODULE (&__this_module)
@@ -60,17 +54,67 @@ extern struct module __this_module;
 #define __CRC_SYMBOL(sym, sec)
 #endif
 
+#ifdef CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
+/*
+ * Emit the ksymtab entry as a pair of relative references: this reduces
+ * the size by half on 64-bit architectures, and eliminates the need for
+ * absolute relocations that require runtime processing on relocatable
+ * kernels.
+ */
+#define __KSYMTAB_ENTRY(sym, sec)					\
+	__ADDRESSABLE(sym)						\
+	asm("	.section \"___ksymtab" sec "+" #sym "\", \"a\"\n"	\
+	    "	.balign	8\n"						\
+	    VMLINUX_SYMBOL_STR(__ksymtab_##sym) ":\n"			\
+	    "	.long	" VMLINUX_SYMBOL_STR(sym) "- .\n"		\
+	    "	.long	" VMLINUX_SYMBOL_STR(__kstrtab_##sym) "- .\n"	\
+	    "	.previous\n")
+
+struct kernel_symbol {
+	int value_offset;
+	int name_offset;
+};
+
+static inline unsigned long kernel_symbol_value(const struct kernel_symbol *sym)
+{
+	return (unsigned long)&sym->value_offset + sym->value_offset;
+}
+
+static inline const char *kernel_symbol_name(const struct kernel_symbol *sym)
+{
+	return (const char *)((unsigned long)&sym->name_offset +
+			      sym->name_offset);
+}
+#else
+#define __KSYMTAB_ENTRY(sym, sec)					\
+	static const struct kernel_symbol __ksymtab_##sym		\
+	__attribute__((section("___ksymtab" sec "+" #sym), used))	\
+	= { (unsigned long)&sym, __kstrtab_##sym }
+
+struct kernel_symbol {
+	unsigned long value;
+	const char *name;
+};
+
+static inline unsigned long kernel_symbol_value(const struct kernel_symbol *sym)
+{
+	return sym->value;
+}
+
+static inline const char *kernel_symbol_name(const struct kernel_symbol *sym)
+{
+	return sym->name;
+}
+#endif
+
 /* For every exported symbol, place a struct in the __ksymtab section */
 #define ___EXPORT_SYMBOL(sym, sec)					\
 	extern typeof(sym) sym;						\
 	__CRC_SYMBOL(sym, sec)						\
-	static const char __kstrtab_##sym[]				\
-	__attribute__((section("__ksymtab_strings"), aligned(1)))	\
+	static const char __kstrtab_##sym[] __used __aligned(1)		\
+	__attribute__((section("__ksymtab_strings")))			\
 	= VMLINUX_SYMBOL_STR(sym);					\
-	static const struct kernel_symbol __ksymtab_##sym		\
-	__used								\
-	__attribute__((section("___ksymtab" sec "+" #sym), used))	\
-	= { (unsigned long)&sym, __kstrtab_##sym }
+	__KSYMTAB_ENTRY(sym, sec)
 
 #if defined(__KSYM_DEPS__)
diff --git a/kernel/module.c b/kernel/module.c
index 40f983cbea81..904c3634fdd0 100644
--- a/kernel/module.c
+++ b/kernel/module.c
@@ -544,7 +544,7 @@ static int cmp_name(const void *va, const void *vb)
 	const char *a;
 	const struct kernel_symbol *b;
 	a = va; b = vb;
-	return strcmp(a, b->name);
+	return strcmp(a, kernel_symbol_name(b));
 }
 
 static bool find_symbol_in_section(const struct symsearch *syms,
@@ -2190,7 +2190,7 @@ void *__symbol_get(const char *symbol)
 		sym = NULL;
 	preempt_enable();
 
-	return sym ? (void *)sym->value : NULL;
+	return sym ? (void *)kernel_symbol_value(sym) : NULL;
 }
 EXPORT_SYMBOL_GPL(__symbol_get);
 
@@ -2220,10 +2220,12 @@ static int verify_export_symbols(struct module *mod)
 
 	for (i = 0; i < ARRAY_SIZE(arr); i++) {
 		for (s = arr[i].sym; s < arr[i].sym + arr[i].num; s++) {
-			if (find_symbol(s->name, &owner, NULL, true, false)) {
+			if (find_symbol(kernel_symbol_name(s), &owner, NULL,
+					true, false)) {
 				pr_err("%s: exports duplicate symbol %s"
 				       " (owned by %s)\n",
-				       mod->name, s->name, module_name(owner));
+				       mod->name, kernel_symbol_name(s),
+				       module_name(owner));
 				return -ENOEXEC;
 			}
 		}
@@ -2272,7 +2274,7 @@ static int simplify_symbols(struct module *mod, const struct load_info *info)
 		ksym = resolve_symbol_wait(mod, info, name);
 		/* Ok if resolved. */
 		if (ksym && !IS_ERR(ksym)) {
-			sym[i].st_value = ksym->value;
+			sym[i].st_value = kernel_symbol_value(ksym);
 			break;
 		}
@@ -2532,7 +2534,7 @@ static int is_exported(const char *name, unsigned long value,
 		ks = lookup_symbol(name, __start___ksymtab, __stop___ksymtab);
 	else
 		ks = lookup_symbol(name, mod->syms, mod->syms + mod->num_syms);
-	return ks != NULL && ks->value == value;
+	return ks != NULL && kernel_symbol_value(ks) == value;
 }
 
 /* As per nm */