From patchwork Wed Jun 12 04:22:48 2013
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 17808
From: John Stultz
To: LKML
Cc: Minchan Kim, Andrew Morton, Android Kernel Team, Robert Love,
    Mel Gorman, Hugh Dickins, Dave Hansen, Rik van Riel,
    Dmitry Adamushko, Dave Chinner, Neil Brown, Andrea Righi,
    Andrea Arcangeli, "Aneesh Kumar K.V", Mike Hommey, Taras Glek,
    Dhaval Giani, Jan Kara, KOSAKI Motohiro, Michel Lespinasse,
    "linux-mm@kvack.org", John Stultz
Subject: [PATCH 5/8] vrange: Add new vrange(2) system call
Date: Tue, 11 Jun 2013 21:22:48 -0700
Message-Id: <1371010971-15647-6-git-send-email-john.stultz@linaro.org>
X-Mailer: git-send-email 1.8.1.2
In-Reply-To: <1371010971-15647-1-git-send-email-john.stultz@linaro.org>
References: <1371010971-15647-1-git-send-email-john.stultz@linaro.org>

From: Minchan Kim

This patch adds the new system call sys_vrange.

NAME
	vrange - Mark or unmark a range of memory as volatile

SYNOPSIS
	int vrange(unsigned long start, size_t length, int mode, int *purged);

DESCRIPTION
	Applications can use vrange(2) to advise the kernel how it should
	handle paging I/O in this VM area.  The idea is to help the kernel
	discard pages in the range, instead of reclaiming other memory,
	when memory pressure happens.
	This means the kernel does not discard any pages in the range
	unless there is memory pressure.

	mode:
	VRANGE_VOLATILE
		Hint to the kernel that the VM may discard pages in the
		range when memory pressure happens.
	VRANGE_NONVOLATILE
		Hint to the kernel that the VM should no longer discard
		pages in the range.

	If the application accesses purged memory without first making it
	non-volatile via a VRANGE_NONVOLATILE call, it can receive a
	SIGBUS if the page was discarded by the kernel.

	purged: Pointer to an integer which will be set to 1 if
	mode == VRANGE_NONVOLATILE and any page in the affected range was
	purged.  If purged returns zero during a mode == VRANGE_NONVOLATILE
	call, all of the pages in the range are intact.

RETURN VALUE
	On success vrange returns the number of bytes marked or unmarked.
	Similar to write(), it may return fewer bytes than specified if it
	ran into a problem.

	If an error is returned, no changes were made.

ERRORS
	EINVAL This error can occur for the following reasons:
		* The value of length is negative or not in page-size units.
		* addr is not page-aligned.
		* mode is not a valid value.
	ENOMEM Not enough memory.

	EFAULT purged pointer is invalid.

Cc: Andrew Morton
Cc: Android Kernel Team
Cc: Robert Love
Cc: Mel Gorman
Cc: Hugh Dickins
Cc: Dave Hansen
Cc: Rik van Riel
Cc: Dmitry Adamushko
Cc: Dave Chinner
Cc: Neil Brown
Cc: Andrea Righi
Cc: Andrea Arcangeli
Cc: Aneesh Kumar K.V
Cc: Mike Hommey
Cc: Taras Glek
Cc: Dhaval Giani
Cc: Jan Kara
Cc: KOSAKI Motohiro
Cc: Michel Lespinasse
Cc: Minchan Kim
Cc: linux-mm@kvack.org
Signed-off-by: Minchan Kim
[jstultz: Major rework of interface and commit message]
Signed-off-by: John Stultz
---
 arch/x86/syscalls/syscall_64.tbl       |   1 +
 include/uapi/asm-generic/mman-common.h |   3 +
 mm/vrange.c                            | 147 +++++++++++++++++++++++++++++++++
 3 files changed, 151 insertions(+)

diff --git a/arch/x86/syscalls/syscall_64.tbl b/arch/x86/syscalls/syscall_64.tbl
index 38ae65d..dc332bd 100644
--- a/arch/x86/syscalls/syscall_64.tbl
+++ b/arch/x86/syscalls/syscall_64.tbl
@@ -320,6 +320,7 @@
 311	64	process_vm_writev	sys_process_vm_writev
 312	common	kcmp			sys_kcmp
 313	common	finit_module		sys_finit_module
+314	common	vrange			sys_vrange
 #
 # x32-specific system call numbers start at 512 to avoid cache impact

diff --git a/include/uapi/asm-generic/mman-common.h b/include/uapi/asm-generic/mman-common.h
index 4164529..9be120b 100644
--- a/include/uapi/asm-generic/mman-common.h
+++ b/include/uapi/asm-generic/mman-common.h
@@ -66,4 +66,7 @@
 #define MAP_HUGE_SHIFT	26
 #define MAP_HUGE_MASK	0x3f
 
+#define VRANGE_VOLATILE		0	/* unpin pages so VM can discard them */
+#define VRANGE_NONVOLATILE	1	/* pin pages so VM can't discard them */
+
 #endif /* __ASM_GENERIC_MMAN_COMMON_H */

diff --git a/mm/vrange.c b/mm/vrange.c
index 5ca8853..f3c2465 100644
--- a/mm/vrange.c
+++ b/mm/vrange.c
@@ -5,6 +5,7 @@
 #include
 #include
 #include
+#include <linux/syscalls.h>

 static struct kmem_cache *vrange_cachep;

@@ -217,3 +218,149 @@ fail:
 	vrange_root_cleanup(new);
 	return -ENOMEM;
 }
+
+static ssize_t do_vrange(struct mm_struct *mm, unsigned long start_idx,
+		unsigned long end_idx, int mode, int
+		*purged)
+{
+	struct vm_area_struct *vma;
+	unsigned long orig_start = start_idx;
+	ssize_t count = 0, ret = 0;
+
+	down_read(&mm->mmap_sem);
+
+	vma = find_vma(mm, start_idx);
+	for (;;) {
+		struct vrange_root *vroot;
+		unsigned long tmp, vstart_idx, vend_idx;
+
+		if (!vma)
+			goto out;
+
+		/* make sure start is at the front of the current vma */
+		if (start_idx < vma->vm_start) {
+			start_idx = vma->vm_start;
+			if (start_idx > end_idx)
+				goto out;
+		}
+
+		/* bound tmp to closer of vm_end & end */
+		tmp = vma->vm_end - 1;
+		if (end_idx < tmp)
+			tmp = end_idx;
+
+		if (vma->vm_file && (vma->vm_flags & VM_SHARED)) {
+			/* Convert to file relative offsets */
+			vroot = &vma->vm_file->f_mapping->vroot;
+			vstart_idx = vma->vm_pgoff + start_idx - vma->vm_start;
+			vend_idx = vma->vm_pgoff + tmp - vma->vm_start;
+		} else {
+			vroot = &mm->vroot;
+			vstart_idx = start_idx;
+			vend_idx = tmp;
+		}
+
+		/* mark or unmark */
+		if (mode == VRANGE_VOLATILE)
+			ret = vrange_add(vroot, vstart_idx, vend_idx);
+		else if (mode == VRANGE_NONVOLATILE)
+			ret = vrange_remove(vroot, vstart_idx, vend_idx,
+					purged);
+
+		if (ret)
+			goto out;
+
+		/* update count to distance covered so far */
+		count = tmp - orig_start;
+
+		/* move start up to the end of the vma */
+		start_idx = vma->vm_end;
+		if (start_idx > end_idx)
+			goto out;
+		/* move to the next vma */
+		vma = vma->vm_next;
+	}
+out:
+	up_read(&mm->mmap_sem);
+
+	/* report bytes successfully marked, even if we're exiting on error */
+	if (count)
+		return count;
+
+	return ret;
+}
+
+/*
+ * The vrange(2) system call.
+ *
+ * Applications can use vrange() to advise the kernel how it should
+ * handle paging I/O in this VM area.  The idea is to help the kernel
+ * discard pages of vrange instead of swapping out when memory pressure
+ * happens.  The information provided is advisory only, and can be safely
+ * disregarded by the kernel if the system has enough free memory.
+ *
+ * mode values:
+ *	VRANGE_VOLATILE - hint to kernel so VM can discard vrange pages when
+ *	memory pressure happens.
+ *	VRANGE_NONVOLATILE - Removes any volatile hints previously specified
+ *	in that range.
+ *
+ * purged ptr:
+ *	Returns 1 if any page in the range being marked nonvolatile has
+ *	been purged.
+ *
+ * Return values:
+ *	On success vrange returns the number of bytes marked or unmarked.
+ *	Similar to write(), it may return fewer bytes than specified if
+ *	it ran into a problem.
+ *
+ *	If an error is returned, no changes were made.
+ *
+ * Errors:
+ *	-EINVAL - len < 0, start is not page-aligned, start is greater
+ *	than TASK_SIZE or "mode" is not a valid value.
+ *	-ENOMEM - Short of free memory in system for successful system call.
+ *	-EFAULT - Purged pointer is invalid.
+ *	-ENOSUP - Feature not yet supported.
+ */
+SYSCALL_DEFINE4(vrange, unsigned long, start,
+		size_t, len, int, mode, int __user *, purged)
+{
+	unsigned long end;
+	struct mm_struct *mm = current->mm;
+	ssize_t ret = -EINVAL;
+	int p = 0;
+
+	if (start & ~PAGE_MASK)
+		goto out;
+
+	len &= PAGE_MASK;
+	if (!len)
+		goto out;
+
+	end = start + len;
+	if (end < start)
+		goto out;
+
+	if (start >= TASK_SIZE)
+		goto out;
+
+	if (purged) {
+		/* Test pointer is valid before making any changes */
+		if (put_user(p, purged))
+			return -EFAULT;
+	}
+
+	ret = do_vrange(mm, start, end - 1, mode, &p);
+
+	if (purged) {
+		if (put_user(p, purged)) {
+			/*
+			 * This would be bad, since we've modified volatility
+			 * and the change in purged state would be lost.
+			 */
+			BUG();
+		}
+	}
+
+out:
+	return ret;
+}