From patchwork Mon Sep 12 18:21:27 2016
X-Patchwork-Submitter: David Long
X-Patchwork-Id: 76020
Delivered-To: patches@linaro.org
From: David Long <dave.long@linaro.org>
To: Masami Hiramatsu, Ananth N Mavinakayanahalli, Anil S Keshavamurthy,
 "David S. Miller", Will Deacon, linux-kernel@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, catalin.marinas@arm.com,
 Sandeepa Prabhu, William Cohen, Pratyush Anand
Cc: Mark Brown
Subject: [PATCH v4] arm64: Improve kprobes test for atomic sequence
Date: Mon, 12 Sep 2016 14:21:27 -0400
Message-Id: <1473704487-6069-1-git-send-email-dave.long@linaro.org>
X-Mailer: git-send-email 2.5.0

From: "David A. Long"

Kprobes searches backwards a finite number of instructions to determine
if there is an attempt to probe a load/store exclusive sequence. It stops
when it hits the maximum number of instructions or a load or store
exclusive. However, this means it can run past the beginning of the
function and start looking at literal constants. This has been shown to
cause a false positive and block insertion of the probe.

To fix this, further limit the backwards search to stop if it hits a
symbol address from kallsyms. The presumption is that this is the entry
point to this code (particularly for the common case of placing probes
at the beginning of functions). This also improves efficiency by not
searching code that is not part of the function.
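The scan-limit arithmetic described above can be sketched in standalone
user-space C (an illustration only, not kernel code; `MAX_ATOMIC_CONTEXT_SIZE`
here uses the value from the arm64 kprobes header, and `limit_scan_end` is a
hypothetical helper name mirroring the logic the patch adds around
`kallsyms_lookup_size_offset`):

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t kprobe_opcode_t;

/* Value from arch/arm64/kernel/probes/decode-insn.h: 32 instructions. */
#define MAX_ATOMIC_CONTEXT_SIZE (128 / sizeof(kprobe_opcode_t))

/*
 * Clamp the backward-scan limit: never scan past the start of the
 * function (the byte offset of the probe address within its kallsyms
 * symbol), and never more than MAX_ATOMIC_CONTEXT_SIZE instructions.
 * Hypothetical stand-in mirroring the patch's scan_end computation.
 */
static kprobe_opcode_t *limit_scan_end(kprobe_opcode_t *addr,
				       unsigned long offset /* bytes */)
{
	if (offset < MAX_ATOMIC_CONTEXT_SIZE * sizeof(kprobe_opcode_t))
		return addr - (offset / sizeof(kprobe_opcode_t));
	return addr - MAX_ATOMIC_CONTEXT_SIZE;
}
```

For a probe 8 bytes (two instructions) into a function the scan stops at the
function entry; deep inside a large function it falls back to the fixed
32-instruction cap.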
There may be some possibility that the label might not denote the entry
path to the probed instruction, but the likelihood seems low and this is
just another example of how the kprobes user really needs to be careful
about what they are doing.

Signed-off-by: David A. Long <dave.long@linaro.org>
---
 arch/arm64/kernel/probes/decode-insn.c | 48 ++++++++++++++++------------------
 1 file changed, 23 insertions(+), 25 deletions(-)

-- 
2.5.0

diff --git a/arch/arm64/kernel/probes/decode-insn.c b/arch/arm64/kernel/probes/decode-insn.c
index 37e47a9..d1731bf 100644
--- a/arch/arm64/kernel/probes/decode-insn.c
+++ b/arch/arm64/kernel/probes/decode-insn.c
@@ -16,6 +16,7 @@
 #include <linux/kernel.h>
 #include <linux/kprobes.h>
 #include <linux/module.h>
+#include <linux/kallsyms.h>
 #include <asm/kprobes.h>
 #include <asm/insn.h>
 #include <asm/sections.h>
@@ -122,7 +123,7 @@ arm_probe_decode_insn(kprobe_opcode_t insn, struct arch_specific_insn *asi)
 static bool __kprobes is_probed_address_atomic(kprobe_opcode_t *scan_start,
 					       kprobe_opcode_t *scan_end)
 {
-	while (scan_start > scan_end) {
+	while (scan_start >= scan_end) {
 		/*
 		 * atomic region starts from exclusive load and ends with
 		 * exclusive store.
@@ -142,33 +143,30 @@ arm_kprobe_decode_insn(kprobe_opcode_t *addr, struct arch_specific_insn *asi)
 {
 	enum kprobe_insn decoded;
 	kprobe_opcode_t insn = le32_to_cpu(*addr);
-	kprobe_opcode_t *scan_start = addr - 1;
-	kprobe_opcode_t *scan_end = addr - MAX_ATOMIC_CONTEXT_SIZE;
-#if defined(CONFIG_MODULES) && defined(MODULES_VADDR)
-	struct module *mod;
-#endif
-
-	if (addr >= (kprobe_opcode_t *)_text &&
-	    scan_end < (kprobe_opcode_t *)_text)
-		scan_end = (kprobe_opcode_t *)_text;
-#if defined(CONFIG_MODULES) && defined(MODULES_VADDR)
-	else {
-		preempt_disable();
-		mod = __module_address((unsigned long)addr);
-		if (mod && within_module_init((unsigned long)addr, mod) &&
-			!within_module_init((unsigned long)scan_end, mod))
-			scan_end = (kprobe_opcode_t *)mod->init_layout.base;
-		else if (mod && within_module_core((unsigned long)addr, mod) &&
-			!within_module_core((unsigned long)scan_end, mod))
-			scan_end = (kprobe_opcode_t *)mod->core_layout.base;
-		preempt_enable();
+	kprobe_opcode_t *scan_end = NULL;
+	unsigned long size = 0, offset = 0;
+
+	/*
+	 * If there's a symbol defined in front of and near enough to
+	 * the probe address assume it is the entry point to this
+	 * code and use it to further limit how far back we search
+	 * when determining if we're in an atomic sequence. If we could
+	 * not find any symbol skip the atomic test altogether as we
+	 * could otherwise end up searching irrelevant text/literals.
+	 * KPROBES depends on KALLSYMS so this last case should never
+	 * happen.
+	 */
+	if (kallsyms_lookup_size_offset((unsigned long) addr, &size, &offset)) {
+		if (offset < (MAX_ATOMIC_CONTEXT_SIZE*sizeof(kprobe_opcode_t)))
+			scan_end = addr - (offset / sizeof(kprobe_opcode_t));
+		else
+			scan_end = addr - MAX_ATOMIC_CONTEXT_SIZE;
 	}
-#endif
 
 	decoded = arm_probe_decode_insn(insn, asi);
 
-	if (decoded == INSN_REJECTED ||
-	    is_probed_address_atomic(scan_start, scan_end))
-		return INSN_REJECTED;
+	if (decoded != INSN_REJECTED && scan_end)
+		if (is_probed_address_atomic(addr - 1, scan_end))
+			return INSN_REJECTED;
 
 	return decoded;
 }
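For readers unfamiliar with the scan the patch bounds, here is a standalone
user-space sketch of the backward walk (not kernel code: the predicate names
stand in for the kernel's `aarch64_insn_is_store_ex`/`aarch64_insn_is_load_ex`
helpers, and the opcode constants are illustrative placeholders). Note the
`>=` comparison, so the instruction at `scan_end` itself is examined, the
off-by-one the patch also fixes:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t kprobe_opcode_t;

/* Hypothetical stand-ins for the kernel's aarch64_insn_is_* checks. */
static bool is_load_exclusive(kprobe_opcode_t insn)
{
	return insn == 0xC85F7C00u; /* placeholder ldxr-style encoding */
}

static bool is_store_exclusive(kprobe_opcode_t insn)
{
	return insn == 0xC8007C00u; /* placeholder stxr-style encoding */
}

/*
 * Walk backwards from scan_start down to and including scan_end.
 * An atomic region starts at an exclusive load and ends at an
 * exclusive store, so meeting a store first means the probe lies
 * after a completed sequence; meeting a load first means the probe
 * lies inside one and must be rejected.
 */
static bool is_probed_address_atomic(kprobe_opcode_t *scan_start,
				     kprobe_opcode_t *scan_end)
{
	while (scan_start >= scan_end) {
		if (is_store_exclusive(*scan_start))
			return false;
		if (is_load_exclusive(*scan_start))
			return true;
		scan_start--;
	}
	return false;
}
```

With `scan_end` clamped to the function entry by the patch, the walk can no
longer wander into literal pool data belonging to the preceding function.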