From patchwork Tue Oct 20 00:47:54 2015
X-Patchwork-Submitter: Zefan Li
X-Patchwork-Id: 55272
From: lizf@kernel.org
To: stable@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Catalin Marinas, Andrew Morton, Linus Torvalds, Zefan Li
Subject: [PATCH 3.4 44/65] mm: kmemleak: allow safe memory scanning during kmemleak disabling
Date: Tue, 20 Oct 2015 08:47:54 +0800
Message-Id: <1445302095-4695-44-git-send-email-lizf@kernel.org>
In-Reply-To: <1445302030-4607-1-git-send-email-lizf@kernel.org>
References: <1445302030-4607-1-git-send-email-lizf@kernel.org>

From: Catalin Marinas

3.4.110-rc1 review patch.  If anyone has any objections, please let me know.

------------------

commit c5f3b1a51a591c18c8b33983908e7fdda6ae417e upstream.

The kmemleak scanning thread can run for minutes.  Callbacks like
kmemleak_free() are allowed during this time, the race being taken care
of by the object->lock spinlock.  Such lock also prevents a memory block
from being freed or unmapped while it is being scanned by blocking the
kmemleak_free() -> ... -> __delete_object() function until the lock is
released in scan_object().

When a kmemleak error occurs (e.g.
it fails to allocate its metadata), kmemleak_enabled is cleared and
__delete_object() is no longer called on freed objects.  If
kmemleak_scan is running at the same time, kmemleak_free() no longer
waits for the object scanning to complete, allowing the corresponding
memory block to be freed or unmapped (in the case of vfree()).  This
leads to kmemleak_scan potentially triggering a page fault.

This patch separates the kmemleak_free() enabling/disabling from the
overall kmemleak_enabled knob so that we can defer the disabling of the
object freeing tracking until the scanning thread has completed.  The
kmemleak_free_part() is deliberately ignored by this patch since this is
only called during boot before the scanning thread has started.

Signed-off-by: Catalin Marinas
Reported-by: Vignesh Radhakrishnan
Tested-by: Vignesh Radhakrishnan
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
[lizf: Backported to 3.4: adjust context]
Signed-off-by: Zefan Li
---
 mm/kmemleak.c | 19 ++++++++++++++++---
 1 file changed, 16 insertions(+), 3 deletions(-)

diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index ad6ee88..c74827c 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -193,6 +193,8 @@ static struct kmem_cache *scan_area_cache;
 
 /* set if tracing memory operations is enabled */
 static atomic_t kmemleak_enabled = ATOMIC_INIT(0);
+/* same as above but only for the kmemleak_free() callback */
+static int kmemleak_free_enabled;
 /* set in the late_initcall if there were no errors */
 static atomic_t kmemleak_initialized = ATOMIC_INIT(0);
 /* enables or disables early logging of the memory operations */
@@ -936,7 +938,7 @@ void __ref kmemleak_free(const void *ptr)
 {
 	pr_debug("%s(0x%p)\n", __func__, ptr);
 
-	if (atomic_read(&kmemleak_enabled) && ptr && !IS_ERR(ptr))
+	if (kmemleak_free_enabled && ptr && !IS_ERR(ptr))
 		delete_object_full((unsigned long)ptr);
 	else if (atomic_read(&kmemleak_early_log))
 		log_early(KMEMLEAK_FREE, ptr, 0, 0);
@@ -976,7 +978,7 @@ void __ref kmemleak_free_percpu(const void __percpu *ptr)
 
 	pr_debug("%s(0x%p)\n", __func__, ptr);
 
-	if (atomic_read(&kmemleak_enabled) && ptr && !IS_ERR(ptr))
+	if (kmemleak_free_enabled && ptr && !IS_ERR(ptr))
 		for_each_possible_cpu(cpu)
 			delete_object_full((unsigned long)per_cpu_ptr(ptr,
 								      cpu));
@@ -1690,6 +1692,13 @@ static void kmemleak_do_cleanup(struct work_struct *work)
 	mutex_lock(&scan_mutex);
 	stop_scan_thread();
 
+	/*
+	 * Once the scan thread has stopped, it is safe to no longer track
+	 * object freeing. Ordering of the scan thread stopping and the memory
+	 * accesses below is guaranteed by the kthread_stop() function.
+	 */
+	kmemleak_free_enabled = 0;
+
 	if (cleanup) {
 		rcu_read_lock();
 		list_for_each_entry_rcu(object, &object_list, object_list)
@@ -1717,6 +1726,8 @@ static void kmemleak_disable(void)
 	/* check whether it is too early for a kernel thread */
 	if (atomic_read(&kmemleak_initialized))
 		schedule_work(&cleanup_work);
+	else
+		kmemleak_free_enabled = 0;
 
 	pr_info("Kernel memory leak detector disabled\n");
 }
@@ -1782,8 +1793,10 @@ void __init kmemleak_init(void)
 	if (atomic_read(&kmemleak_error)) {
 		local_irq_restore(flags);
 		return;
-	} else
+	} else {
 		atomic_set(&kmemleak_enabled, 1);
+		kmemleak_free_enabled = 1;
+	}
 	local_irq_restore(flags);
 
 	/*