From patchwork Fri Mar 21 21:17:32 2014
X-Patchwork-Submitter: John Stultz <john.stultz@linaro.org>
X-Patchwork-Id: 26881
From: John Stultz <john.stultz@linaro.org>
To: LKML
Cc: John Stultz, Andrew Morton, Android Kernel Team, Johannes Weiner,
	Robert Love, Mel Gorman, Hugh Dickins, Dave Hansen, Rik van Riel,
	Dmitry Adamushko, Neil Brown, Andrea Arcangeli, Mike Hommey,
	Taras Glek, Jan Kara, KOSAKI Motohiro, Michel Lespinasse,
	Minchan Kim, linux-mm@kvack.org
Subject: [PATCH 2/5] vrange: Add purged page detection on setting memory non-volatile
Date: Fri, 21 Mar 2014 14:17:32 -0700
Message-Id: <1395436655-21670-3-git-send-email-john.stultz@linaro.org>
X-Mailer: git-send-email 1.8.3.2
In-Reply-To: <1395436655-21670-1-git-send-email-john.stultz@linaro.org>
References: <1395436655-21670-1-git-send-email-john.stultz@linaro.org>

Users of volatile ranges will need to know if memory they marked
volatile was discarded. This patch adds the purged-state tracking
required to inform userland, when it marks memory as non-volatile,
that some memory in that range was purged and needs to be regenerated.

This is a simplified implementation that reuses some of the logic from
Minchan's earlier efforts, so credit to Minchan for his work.

Cc: Andrew Morton
Cc: Android Kernel Team
Cc: Johannes Weiner
Cc: Robert Love
Cc: Mel Gorman
Cc: Hugh Dickins
Cc: Dave Hansen
Cc: Rik van Riel
Cc: Dmitry Adamushko
Cc: Neil Brown
Cc: Andrea Arcangeli
Cc: Mike Hommey
Cc: Taras Glek
Cc: Jan Kara
Cc: KOSAKI Motohiro
Cc: Michel Lespinasse
Cc: Minchan Kim
Cc: linux-mm@kvack.org
Signed-off-by: John Stultz <john.stultz@linaro.org>
Acked-by: Jan Kara
---
 include/linux/swap.h    | 15 ++++++++--
 include/linux/swapops.h | 10 +++++++
 include/linux/vrange.h  |  3 ++
 mm/vrange.c             | 75 +++++++++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 101 insertions(+), 2 deletions(-)
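For context, the expected userland flow looks roughly like the sketch
below. This is illustrative only: the vrange() syscall itself is
introduced earlier in this series, its exact prototype is assumed here
from how do_vrange() fills *purged in this patch, and regenerate_data()
is a hypothetical application helper.

	int purged = 0;

	/* Mark a cache volatile: under memory pressure the kernel may
	 * now discard these pages instead of swapping them out. */
	vrange(addr, len, VRANGE_VOLATILE, &purged);

	/* ... later, take the range back before touching the data ... */
	vrange(addr, len, VRANGE_NONVOLATILE, &purged);
	if (purged) {
		/* Some pages in [addr, addr + len) were discarded while
		 * volatile, so the cached contents must be rebuilt. */
		regenerate_data(addr, len);
	}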
diff --git a/include/linux/swap.h b/include/linux/swap.h
index 46ba0c6..18c12f9 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -70,8 +70,19 @@ static inline int current_is_kswapd(void)
 #define SWP_HWPOISON_NUM 0
 #endif
 
-#define MAX_SWAPFILES \
-	((1 << MAX_SWAPFILES_SHIFT) - SWP_MIGRATION_NUM - SWP_HWPOISON_NUM)
+
+/*
+ * Purged volatile range pages
+ */
+#define SWP_VRANGE_PURGED_NUM 1
+#define SWP_VRANGE_PURGED (MAX_SWAPFILES + SWP_HWPOISON_NUM + SWP_MIGRATION_NUM)
+
+
+#define MAX_SWAPFILES ((1 << MAX_SWAPFILES_SHIFT)	\
+	- SWP_MIGRATION_NUM				\
+	- SWP_HWPOISON_NUM				\
+	- SWP_VRANGE_PURGED_NUM				\
+	)
 
 /*
  * Magic header for a swap area. The first part of the union is
diff --git a/include/linux/swapops.h b/include/linux/swapops.h
index c0f7526..84f43d9 100644
--- a/include/linux/swapops.h
+++ b/include/linux/swapops.h
@@ -161,6 +161,16 @@ static inline int is_write_migration_entry(swp_entry_t entry)
 
 #endif
 
+static inline swp_entry_t make_vpurged_entry(void)
+{
+	return swp_entry(SWP_VRANGE_PURGED, 0);
+}
+
+static inline int is_vpurged_entry(swp_entry_t entry)
+{
+	return swp_type(entry) == SWP_VRANGE_PURGED;
+}
+
 #ifdef CONFIG_MEMORY_FAILURE
 /*
  * Support for hardware poisoned pages
diff --git a/include/linux/vrange.h b/include/linux/vrange.h
index 6e5331e..986fa85 100644
--- a/include/linux/vrange.h
+++ b/include/linux/vrange.h
@@ -1,6 +1,9 @@
 #ifndef _LINUX_VRANGE_H
 #define _LINUX_VRANGE_H
 
+#include <linux/swap.h>
+#include <linux/swapops.h>
+
 #define VRANGE_NONVOLATILE 0
 #define VRANGE_VOLATILE 1
 #define VRANGE_VALID_FLAGS (0) /* Don't yet support any flags */
diff --git a/mm/vrange.c b/mm/vrange.c
index 2f8e2ce..1ff3cbd 100644
--- a/mm/vrange.c
+++ b/mm/vrange.c
@@ -8,6 +8,76 @@
 #include <linux/mm_inline.h>
 #include "internal.h"
 
+struct vrange_walker {
+	struct vm_area_struct *vma;
+	int page_was_purged;
+};
+
+
+/**
+ * vrange_check_purged_pte - Checks ptes for purged pages
+ *
+ * Iterates over the ptes in the pmd checking if they have
+ * purged swap entries.
+ *
+ * Sets vrange_walker.page_was_purged to 1 if any were purged.
+ */
+static int vrange_check_purged_pte(pmd_t *pmd, unsigned long addr,
+				unsigned long end, struct mm_walk *walk)
+{
+	struct vrange_walker *vw = walk->private;
+	pte_t *pte;
+	spinlock_t *ptl;
+
+	if (pmd_trans_huge(*pmd))
+		return 0;
+	if (pmd_trans_unstable(pmd))
+		return 0;
+
+	pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
+	for (; addr != end; pte++, addr += PAGE_SIZE) {
+		if (!pte_present(*pte)) {
+			swp_entry_t vrange_entry = pte_to_swp_entry(*pte);
+
+			if (unlikely(is_vpurged_entry(vrange_entry))) {
+				vw->page_was_purged = 1;
+				break;
+			}
+		}
+	}
+	pte_unmap_unlock(pte - 1, ptl);
+	cond_resched();
+
+	return 0;
+}
+
+
+/**
+ * vrange_check_purged - Sets up a mm_walk to check for purged pages
+ *
+ * Sets up and calls walk_page_range() to check for purged pages.
+ *
+ * Returns 1 if pages in the range were purged, 0 otherwise.
+ */
+static int vrange_check_purged(struct mm_struct *mm,
+			 struct vm_area_struct *vma,
+			 unsigned long start,
+			 unsigned long end)
+{
+	struct vrange_walker vw;
+	struct mm_walk vrange_walk = {
+		.pmd_entry = vrange_check_purged_pte,
+		.mm = vma->vm_mm,
+		.private = &vw,
+	};
+	vw.page_was_purged = 0;
+	vw.vma = vma;
+
+	walk_page_range(start, end, &vrange_walk);
+
+	return vw.page_was_purged;
+
+}
 
 /**
  * do_vrange - Marks or clears VMAs in the range (start-end) as VM_VOLATILE
@@ -106,6 +176,11 @@ success:
 		vma = prev->vm_next;
 	}
 out:
+	if (count && (mode == VRANGE_NONVOLATILE))
+		*purged = vrange_check_purged(mm, vma,
+						orig_start,
+						orig_start+count);
+
 	up_read(&mm->mmap_sem);
 
 	/* report bytes successfully marked, even if we're exiting on error */
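Note that this patch only adds the detection side: vrange_check_purged_pte()
recognizes a purged page by the special swap entry left in its pte. The
purge side, which actually installs that entry, lands later in this
series, so the counterpart sketched below is an assumption for
illustration; swp_entry_to_pte() and set_pte_at() are existing kernel
helpers, but this exact sequence is not part of this patch.

	/* On purge (illustrative): after unmapping a volatile page,
	 * leave a non-present pte carrying the "purged" marker entry,
	 * i.e. swp_entry(SWP_VRANGE_PURGED, 0). */
	set_pte_at(mm, addr, pte, swp_entry_to_pte(make_vpurged_entry()));

	/* The walker above then observes the marker on the next
	 * VRANGE_NONVOLATILE call: */
	swp_entry_t entry = pte_to_swp_entry(*pte);
	if (is_vpurged_entry(entry))
		vw->page_was_purged = 1;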