From patchwork Thu Mar 21 17:19:17 2019
X-Patchwork-Submitter: Catalin Marinas
X-Patchwork-Id: 160814
From: Catalin Marinas
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, kvm-ppc@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org, Michael Ellerman, Qian Cai
Subject: [PATCH] kmemleak: powerpc: skip scanning holes in the .bss section
Date: Thu, 21 Mar 2019 17:19:17 +0000
Message-Id: <20190321171917.62049-1-catalin.marinas@arm.com>

Commit 2d4f567103ff ("KVM: PPC: Introduce kvm_tmp framework") places
kvm_tmp[] in the .bss section and then frees the unused remainder of the
buffer back to the page allocator:

  kernel_init
    kvm_guest_init
      kvm_free_tmp
        free_reserved_area
          free_unref_page
            free_unref_page_prepare

With DEBUG_PAGEALLOC=y, those freed pages are unmapped from the kernel.
As a result, the kmemleak scan triggers a panic when it reaches the
unmapped pages while scanning the .bss section.

This patch creates dedicated kmemleak objects for the .data, .bss and
(where needed) .data..ro_after_init sections, so that parts of them can
be freed via kmemleak_free_part(), as done in the powerpc kvm_free_tmp()
function.

Acked-by: Michael Ellerman (powerpc)
Reported-by: Qian Cai
Signed-off-by: Catalin Marinas
---
Posting as a proper patch following the inlined one here:

  http://lkml.kernel.org/r/20190320181656.GB38229@arrakis.emea.arm.com

Changes from the above:

- Added a comment to the powerpc kmemleak_free_part() call

- Only register .data..ro_after_init with kmemleak if it is not contained
  within the .data section (which seems to be the case for lots of
  architectures)

I preserved part of Qian's original commit message but changed the author
since I rewrote the patch.
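For context, the pattern the powerpc hunk relies on looks roughly like the
sketch below. The names (example_pool, example_pool_used,
example_free_unused_pool) are made up for illustration and are not part of
this patch; the point is only how kmemleak_free_part() is paired with
free_reserved_area() when part of a .bss buffer is handed back to the page
allocator:

  #include <linux/init.h>
  #include <linux/kmemleak.h>
  #include <linux/mm.h>

  /* A static buffer like kvm_tmp[]: it ends up in the .bss section. */
  static char example_pool[1024 * 1024] __aligned(PAGE_SIZE);
  static size_t example_pool_used;	/* bytes actually consumed */

  static void __init example_free_unused_pool(void)
  {
  	/*
  	 * Punch a hole in the kmemleak object covering .bss before the
  	 * unused pages are returned (and, with DEBUG_PAGEALLOC=y,
  	 * unmapped), so that a later kmemleak scan does not touch them.
  	 */
  	kmemleak_free_part(&example_pool[example_pool_used],
  			   sizeof(example_pool) - example_pool_used);

  	free_reserved_area(&example_pool[example_pool_used],
  			   &example_pool[sizeof(example_pool)], -1, NULL);
  }

This only works once kmemleak tracks .bss as a discrete object, which is
what the create_object() calls added to kmemleak_init() below provide; the
previous scan_large_block() regions had no object that could be partially
freed.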
 arch/powerpc/kernel/kvm.c |  7 +++++++
 mm/kmemleak.c             | 16 +++++++++++-----
 2 files changed, 18 insertions(+), 5 deletions(-)

Tested-by: Qian Cai

diff --git a/arch/powerpc/kernel/kvm.c b/arch/powerpc/kernel/kvm.c
index 683b5b3805bd..cd381e2291df 100644
--- a/arch/powerpc/kernel/kvm.c
+++ b/arch/powerpc/kernel/kvm.c
@@ -22,6 +22,7 @@
 #include
 #include
 #include
+#include <linux/kmemleak.h>
 #include
 #include
 #include
@@ -712,6 +713,12 @@ static void kvm_use_magic_page(void)
 
 static __init void kvm_free_tmp(void)
 {
+	/*
+	 * Inform kmemleak about the hole in the .bss section since the
+	 * corresponding pages will be unmapped with DEBUG_PAGEALLOC=y.
+	 */
+	kmemleak_free_part(&kvm_tmp[kvm_tmp_index],
+			   ARRAY_SIZE(kvm_tmp) - kvm_tmp_index);
 	free_reserved_area(&kvm_tmp[kvm_tmp_index],
 			   &kvm_tmp[ARRAY_SIZE(kvm_tmp)], -1, NULL);
 }
diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index 707fa5579f66..6c318f5ac234 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -1529,11 +1529,6 @@ static void kmemleak_scan(void)
 	}
 	rcu_read_unlock();
 
-	/* data/bss scanning */
-	scan_large_block(_sdata, _edata);
-	scan_large_block(__bss_start, __bss_stop);
-	scan_large_block(__start_ro_after_init, __end_ro_after_init);
-
 #ifdef CONFIG_SMP
 	/* per-cpu sections scanning */
 	for_each_possible_cpu(i)
@@ -2071,6 +2066,17 @@ void __init kmemleak_init(void)
 	}
 	local_irq_restore(flags);
 
+	/* register the data/bss sections */
+	create_object((unsigned long)_sdata, _edata - _sdata,
+		      KMEMLEAK_GREY, GFP_ATOMIC);
+	create_object((unsigned long)__bss_start, __bss_stop - __bss_start,
+		      KMEMLEAK_GREY, GFP_ATOMIC);
+	/* only register .data..ro_after_init if not within .data */
+	if (__start_ro_after_init < _sdata || __end_ro_after_init > _edata)
+		create_object((unsigned long)__start_ro_after_init,
+			      __end_ro_after_init - __start_ro_after_init,
+			      KMEMLEAK_GREY, GFP_ATOMIC);
+
 	/*
 	 * This is the point where tracking allocations is safe. Automatic
 	 * scanning is started during the late initcall. Add the early logged