From patchwork Thu Mar 13 22:44:26 2014
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 26224
From: John Stultz <john.stultz@linaro.org>
To: dave@sr71.net
Cc: John Stultz <john.stultz@linaro.org>
Subject: [PATCH 1/3] vrange: Add vrange syscall and handle splitting/merging and marking vmas
Date: Thu, 13 Mar 2014 15:44:26 -0700
Message-Id: <1394750668-28654-1-git-send-email-john.stultz@linaro.org>
X-Mailer: git-send-email 1.8.3.2

This patch introduces the vrange() syscall, which allows ranges of memory
to be marked as volatile, meaning they may be discarded by the system.

This initial patch simply adds the syscall and the vma handling: splitting
and merging vmas as needed and marking them with VM_VOLATILE. No purging or
discarding of volatile ranges is done at this point.

Example man page:

NAME
	vrange - Mark or unmark a range of memory as volatile

SYNOPSIS
	int vrange(unsigned long start, size_t length, int mode, int *purged);

DESCRIPTION
	Applications can use vrange(2) to advise the kernel how it should
	handle paging I/O in this VM area. The idea is to help the kernel
	discard pages of the vrange instead of reclaiming them when memory
	pressure happens. This means the kernel does not discard any pages
	of the vrange if there is no memory pressure.

	mode:
	VRANGE_VOLATILE
		Hint to the kernel that the VM may discard pages in the
		range when memory pressure happens.
	VRANGE_NONVOLATILE
		Hint to the kernel that the VM should no longer discard
		pages in the range.

	If a process tries to access purged memory without first calling
	vrange() with VRANGE_NONVOLATILE, it can receive SIGBUS if the page
	was discarded by the kernel.

	purged: Pointer to an integer which will be set to 1 if
		mode == VRANGE_NONVOLATILE and any page in the affected
		range was purged. If purged returns zero during a
		mode == VRANGE_NONVOLATILE call, all of the pages in the
		range are intact.

RETURN VALUE
	On success vrange returns the number of bytes marked or unmarked.
	Similar to write(), it may return fewer bytes than specified if it
	ran into a problem. If an error is returned, no changes were made.

ERRORS
	EINVAL This error can occur for the following reasons:
		* The value of length is negative or not in page-size units.
		* addr is not page-aligned.
		* mode is not a valid value.

	ENOMEM Not enough memory.

	EFAULT The purged pointer is invalid.

This is a simplified implementation which reuses some of the logic from
Minchan's earlier efforts, so credit to Minchan for his work.
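As a rough illustration of the intended usage (not part of this patch, only
a sketch): the following assumes the x86_64 syscall number 316 added below
and the VRANGE_* values from include/linux/vrange.h. There is no libc
wrapper, so syscall(2) is used directly.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>

#define __NR_vrange		316	/* x86_64 number added by this patch */
#define VRANGE_NONVOLATILE	0
#define VRANGE_VOLATILE		1

int main(void)
{
	size_t len = 16 * 4096;
	int purged = 0;
	long ret;

	/* Anonymous mapping; file-backed vmas are rejected with -EINVAL */
	char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED)
		return 1;
	memset(buf, 0xaa, len);

	/* Mark the range volatile: the kernel may discard it under pressure */
	ret = syscall(__NR_vrange, (unsigned long)buf, len,
		      VRANGE_VOLATILE, &purged);
	if (ret < 0)
		return 1;

	/* ... later, before touching the data again, unmark it ... */
	ret = syscall(__NR_vrange, (unsigned long)buf, len,
		      VRANGE_NONVOLATILE, &purged);
	if (ret < 0)
		return 1;
	if (purged)
		printf("contents were discarded; regenerate them\n");

	return 0;
}

A real user (a cache of decoded images, for example) would regenerate the
contents whenever purged comes back nonzero.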
Signed-off-by: John Stultz <john.stultz@linaro.org>
---
 arch/x86/syscalls/syscall_64.tbl |   1 +
 include/linux/mm.h               |   1 +
 include/linux/vrange.h           |   7 ++
 mm/Makefile                      |   2 +-
 mm/vrange.c                      | 152 +++++++++++++++++++++++++++++++++++++++
 5 files changed, 162 insertions(+), 1 deletion(-)
 create mode 100644 include/linux/vrange.h
 create mode 100644 mm/vrange.c

diff --git a/arch/x86/syscalls/syscall_64.tbl b/arch/x86/syscalls/syscall_64.tbl
index a12bddc..7ae3940 100644
--- a/arch/x86/syscalls/syscall_64.tbl
+++ b/arch/x86/syscalls/syscall_64.tbl
@@ -322,6 +322,7 @@
 313	common	finit_module		sys_finit_module
 314	common	sched_setattr		sys_sched_setattr
 315	common	sched_getattr		sys_sched_getattr
+316	common	vrange			sys_vrange
 
 #
 # x32-specific system call numbers start at 512 to avoid cache impact
diff --git a/include/linux/mm.h b/include/linux/mm.h
index c1b7414..a1f11da 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -117,6 +117,7 @@ extern unsigned int kobjsize(const void *objp);
 #define VM_IO		0x00004000	/* Memory mapped I/O or similar */
 
 					/* Used by sys_madvise() */
+#define VM_VOLATILE	0x00001000	/* VMA is volatile */
 #define VM_SEQ_READ	0x00008000	/* App will access data sequentially */
 #define VM_RAND_READ	0x00010000	/* App will not benefit from clustered reads */
 
diff --git a/include/linux/vrange.h b/include/linux/vrange.h
new file mode 100644
index 0000000..652396b
--- /dev/null
+++ b/include/linux/vrange.h
@@ -0,0 +1,7 @@
+#ifndef _LINUX_VRANGE_H
+#define _LINUX_VRANGE_H
+
+#define VRANGE_NONVOLATILE 0
+#define VRANGE_VOLATILE 1
+
+#endif /* _LINUX_VRANGE_H */
diff --git a/mm/Makefile b/mm/Makefile
index 310c90a..20229e2 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -16,7 +16,7 @@ obj-y			:= filemap.o mempool.o oom_kill.o fadvise.o \
 			   readahead.o swap.o truncate.o vmscan.o shmem.o \
 			   util.o mmzone.o vmstat.o backing-dev.o \
 			   mm_init.o mmu_context.o percpu.o slab_common.o \
-			   compaction.o balloon_compaction.o \
+			   compaction.o balloon_compaction.o vrange.o \
 			   interval_tree.o list_lru.o $(mmu-y)
 
 obj-y += init-mm.o
diff --git a/mm/vrange.c b/mm/vrange.c
new file mode 100644
index 0000000..d9116b1
--- /dev/null
+++ b/mm/vrange.c
@@ -0,0 +1,152 @@
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include "internal.h"
+
+static ssize_t do_vrange(struct mm_struct *mm, unsigned long start,
+				unsigned long end, int mode, int *purged)
+{
+	struct vm_area_struct *vma, *prev;
+	unsigned long orig_start = start;
+	ssize_t count = 0, ret = 0;
+	int lpurged = 0;
+
+	down_read(&mm->mmap_sem);
+
+	vma = find_vma_prev(mm, start, &prev);
+	if (vma && start > vma->vm_start)
+		prev = vma;
+
+	for (;;) {
+		unsigned long new_flags;
+		pgoff_t pgoff;
+		unsigned long tmp;
+
+		if (!vma)
+			goto out;
+
+		if (vma->vm_flags & (VM_SPECIAL|VM_LOCKED|VM_MIXEDMAP|
+					VM_HUGETLB))
+			goto out;
+
+		/* We don't support volatility on files for now */
+		if (vma->vm_file) {
+			ret = -EINVAL;
+			goto out;
+		}
+
+		new_flags = vma->vm_flags;
+
+		if (start < vma->vm_start) {
+			start = vma->vm_start;
+			if (start >= end)
+				goto out;
+		}
+		tmp = vma->vm_end;
+		if (end < tmp)
+			tmp = end;
+
+		switch (mode) {
+		case VRANGE_VOLATILE:
+			new_flags |= VM_VOLATILE;
+			break;
+		case VRANGE_NONVOLATILE:
+			new_flags &= ~VM_VOLATILE;
+		}
+
+		pgoff = vma->vm_pgoff + ((start - vma->vm_start) >> PAGE_SHIFT);
+		prev = vma_merge(mm, prev, start, tmp, new_flags,
+					vma->anon_vma, vma->vm_file, pgoff,
+					vma_policy(vma));
+		if (prev)
+			goto success;
+
+		if (start != vma->vm_start) {
+			ret = split_vma(mm, vma, start, 1);
+			if (ret)
+				goto out;
+		}
+
+		if (tmp != vma->vm_end) {
+			ret = split_vma(mm, vma, tmp, 0);
+			if (ret)
+				goto out;
+		}
+
+		prev = vma;
+success:
+		vma->vm_flags = new_flags;
+		*purged = lpurged;
+
+		/* update count to distance covered so far */
+		count = tmp - orig_start;
+
+		if (prev && start < prev->vm_end)
+			start = prev->vm_end;
+		if (start >= end)
+			goto out;
+		if (prev)
+			vma = prev->vm_next;
+		else	/* madvise_remove dropped mmap_sem */
+			vma = find_vma(mm, start);
+	}
+out:
+	up_read(&mm->mmap_sem);
+
+	/* report bytes successfully marked, even if we're exiting on error */
+	if (count)
+		return count;
+
+	return ret;
+}
+
+SYSCALL_DEFINE4(vrange, unsigned long, start,
+		size_t, len, int, mode, int __user *, purged)
+{
+	unsigned long end;
+	struct mm_struct *mm = current->mm;
+	ssize_t ret = -EINVAL;
+	int p = 0;
+
+	if (start & ~PAGE_MASK)
+		goto out;
+
+	len &= PAGE_MASK;
+	if (!len)
+		goto out;
+
+	end = start + len;
+	if (end < start)
+		goto out;
+
+	if (start >= TASK_SIZE)
+		goto out;
+
+	if (purged) {
+		/* Test the pointer is valid before making any changes */
+		if (put_user(p, purged))
+			return -EFAULT;
+	}
+
+	ret = do_vrange(mm, start, end, mode, &p);
+
+	if (purged) {
+		if (put_user(p, purged)) {
+			/*
+			 * This would be bad, since we've modified volatility
+			 * and the change in purged state would be lost.
+			 */
+			WARN_ONCE(1, "vrange: purge state possibly lost\n");
+		}
+	}
+
+out:
+	return ret;
+}
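Not part of this patch, but to illustrate the write()-style partial-return
semantics described in the man page above: a hypothetical userspace helper,
vrange_all(), that keeps retrying until the whole range is covered and
accumulates the purged state across calls. It assumes the same syscall
number and mode values as the patch.

#include <unistd.h>

#define __NR_vrange		316	/* x86_64 number added by this patch */
#define VRANGE_NONVOLATILE	0
#define VRANGE_VOLATILE		1

/* Hypothetical helper: cover the whole [start, start+len) range even when
 * vrange() returns fewer bytes than requested, and OR together the purged
 * results so an earlier purged indication is not lost.
 */
static long vrange_all(unsigned long start, size_t len, int mode, int *purged)
{
	size_t done = 0;
	int any_purged = 0;

	while (done < len) {
		int p = 0;
		long ret = syscall(__NR_vrange, start + done, len - done,
				   mode, &p);

		if (ret < 0)
			return done ? (long)done : -1; /* errno set by syscall() */
		any_purged |= p;
		if (ret == 0)
			break;		/* no further progress possible */
		done += (size_t)ret;
	}
	if (purged)
		*purged = any_purged;
	return (long)done;
}

Accumulating purged with a bitwise OR matters because a later call that only
touches intact pages would otherwise overwrite an earlier purged indication.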