From patchwork Tue Mar 5 06:57:56 2013
X-Patchwork-Submitter: Marek Szyprowski
X-Patchwork-Id: 15225
From: Marek Szyprowski
To: linux-mm@kvack.org, linaro-mm-sig@lists.linaro.org, linux-kernel@vger.kernel.org
Cc: Arnd Bergmann, Bartlomiej Zolnierkiewicz, Mel Gorman, Michal Nazarewicz,
    Minchan Kim, Kyungmin Park, Andrew Morton
Date: Tue, 05 Mar 2013 07:57:56 +0100
Subject: [Linaro-mm-sig] [RFC/PATCH 2/5] mm: get_user_pages: use static inline
Message-id: <1362466679-17111-3-git-send-email-m.szyprowski@samsung.com>
In-reply-to: <1362466679-17111-1-git-send-email-m.szyprowski@samsung.com>
References: <1362466679-17111-1-git-send-email-m.szyprowski@samsung.com>
X-Mailer: git-send-email 1.7.9.5

__get_user_pages() is already an exported function, so get_user_pages() can
easily be inlined into its callers.

Signed-off-by: Marek Szyprowski
Signed-off-by: Kyungmin Park
---
 include/linux/mm.h | 74 +++++++++++++++++++++++++++++++++++++++++++++++++---
 mm/memory.c        | 69 ------------------------------------------------
 2 files changed, 70 insertions(+), 73 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 7acc9dc..9806e54 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1019,10 +1019,7 @@ long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
 			unsigned long start, unsigned long nr_pages,
 			unsigned int foll_flags, struct page **pages,
 			struct vm_area_struct **vmas, int *nonblocking);
-long get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
-			unsigned long start, unsigned long nr_pages,
-			int write, int force, struct page **pages,
-			struct vm_area_struct **vmas);
+
 int get_user_pages_fast(unsigned long start, int nr_pages, int write,
 			struct page **pages);
 struct kvec;
@@ -1642,6 +1639,75 @@ typedef int (*pte_fn_t)(pte_t *pte, pgtable_t token, unsigned long addr,
 extern int apply_to_page_range(struct mm_struct *mm, unsigned long address,
 			       unsigned long size, pte_fn_t fn, void *data);
 
+/*
+ * get_user_pages() - pin user pages in memory
+ * @tsk: the task_struct to use for page fault accounting, or
+ *       NULL if faults are not to be recorded.
+ * @mm: mm_struct of target mm
+ * @start: starting user address
+ * @nr_pages: number of pages from start to pin
+ * @write: whether pages will be written to by the caller
+ * @force: whether to force write access even if user mapping is
+ *         readonly. This will result in the page being COWed even
+ *         in MAP_SHARED mappings. You do not want this.
+ * @pages: array that receives pointers to the pages pinned.
+ *         Should be at least nr_pages long. Or NULL, if caller
+ *         only intends to ensure the pages are faulted in.
+ * @vmas: array of pointers to vmas corresponding to each page.
+ *        Or NULL if the caller does not require them.
+ *
+ * Returns number of pages pinned. This may be fewer than the number
+ * requested. If nr_pages is 0 or negative, returns 0. If no pages
+ * were pinned, returns -errno. Each page returned must be released
+ * with a put_page() call when it is finished with. vmas will only
+ * remain valid while mmap_sem is held.
+ *
+ * Must be called with mmap_sem held for read or write.
+ *
+ * get_user_pages walks a process's page tables and takes a reference to
+ * each struct page that each user address corresponds to at a given
+ * instant. That is, it takes the page that would be accessed if a user
+ * thread accesses the given user virtual address at that instant.
+ *
+ * This does not guarantee that the page exists in the user mappings when
+ * get_user_pages returns, and there may even be a completely different
+ * page there in some cases (eg. if mmapped pagecache has been invalidated
+ * and subsequently re faulted). However it does guarantee that the page
+ * won't be freed completely. And mostly callers simply care that the page
+ * contains data that was valid *at some point in time*. Typically, an IO
+ * or similar operation cannot guarantee anything stronger anyway because
+ * locks can't be held over the syscall boundary.
+ *
+ * If write=0, the page must not be written to. If the page is written to,
+ * set_page_dirty (or set_page_dirty_lock, as appropriate) must be called
+ * after the page is finished with, and before put_page is called.
+ *
+ * get_user_pages is typically used for fewer-copy IO operations, to get a
+ * handle on the memory by some means other than accesses via the user virtual
+ * addresses. The pages may be submitted for DMA to devices or accessed via
+ * their kernel linear mapping (via the kmap APIs). Care should be taken to
+ * use the correct cache flushing APIs.
+ *
+ * See also get_user_pages_fast, for performance critical applications.
+ */
+static inline long get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
+		unsigned long start, unsigned long nr_pages, int write,
+		int force, struct page **pages,
+		struct vm_area_struct **vmas)
+{
+	int flags = FOLL_TOUCH;
+
+	if (pages)
+		flags |= FOLL_GET;
+	if (write)
+		flags |= FOLL_WRITE;
+	if (force)
+		flags |= FOLL_FORCE;
+
+	return __get_user_pages(tsk, mm, start, nr_pages, flags, pages, vmas,
+				NULL);
+}
+
 #ifdef CONFIG_PROC_FS
 void vm_stat_account(struct mm_struct *, unsigned long, struct file *, long);
 #else
diff --git a/mm/memory.c b/mm/memory.c
index 494526a..42dfd8e 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1961,75 +1961,6 @@ int fixup_user_fault(struct task_struct *tsk, struct mm_struct *mm,
 	return 0;
 }
 
-/*
- * get_user_pages() - pin user pages in memory
- * @tsk: the task_struct to use for page fault accounting, or
- *       NULL if faults are not to be recorded.
- * @mm: mm_struct of target mm
- * @start: starting user address
- * @nr_pages: number of pages from start to pin
- * @write: whether pages will be written to by the caller
- * @force: whether to force write access even if user mapping is
- *         readonly. This will result in the page being COWed even
- *         in MAP_SHARED mappings. You do not want this.
- * @pages: array that receives pointers to the pages pinned.
- *         Should be at least nr_pages long. Or NULL, if caller
- *         only intends to ensure the pages are faulted in.
- * @vmas: array of pointers to vmas corresponding to each page.
- *        Or NULL if the caller does not require them.
- *
- * Returns number of pages pinned. This may be fewer than the number
- * requested. If nr_pages is 0 or negative, returns 0. If no pages
- * were pinned, returns -errno. Each page returned must be released
- * with a put_page() call when it is finished with. vmas will only
- * remain valid while mmap_sem is held.
- *
- * Must be called with mmap_sem held for read or write.
- *
- * get_user_pages walks a process's page tables and takes a reference to
- * each struct page that each user address corresponds to at a given
- * instant. That is, it takes the page that would be accessed if a user
- * thread accesses the given user virtual address at that instant.
- *
- * This does not guarantee that the page exists in the user mappings when
- * get_user_pages returns, and there may even be a completely different
- * page there in some cases (eg. if mmapped pagecache has been invalidated
- * and subsequently re faulted). However it does guarantee that the page
- * won't be freed completely. And mostly callers simply care that the page
- * contains data that was valid *at some point in time*. Typically, an IO
- * or similar operation cannot guarantee anything stronger anyway because
- * locks can't be held over the syscall boundary.
- *
- * If write=0, the page must not be written to. If the page is written to,
- * set_page_dirty (or set_page_dirty_lock, as appropriate) must be called
- * after the page is finished with, and before put_page is called.
- *
- * get_user_pages is typically used for fewer-copy IO operations, to get a
- * handle on the memory by some means other than accesses via the user virtual
- * addresses. The pages may be submitted for DMA to devices or accessed via
- * their kernel linear mapping (via the kmap APIs). Care should be taken to
- * use the correct cache flushing APIs.
- *
- * See also get_user_pages_fast, for performance critical applications.
- */
-long get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
-		unsigned long start, unsigned long nr_pages, int write,
-		int force, struct page **pages, struct vm_area_struct **vmas)
-{
-	int flags = FOLL_TOUCH;
-
-	if (pages)
-		flags |= FOLL_GET;
-	if (write)
-		flags |= FOLL_WRITE;
-	if (force)
-		flags |= FOLL_FORCE;
-
-	return __get_user_pages(tsk, mm, start, nr_pages, flags, pages, vmas,
-				NULL);
-}
-EXPORT_SYMBOL(get_user_pages);
-
 /**
  * get_dump_page() - pin user page in memory while writing it to core dump
  * @addr: user address
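
For readers less familiar with this interface, a minimal caller sketch follows
(not taken from this patch; the function name and buffer handling are
hypothetical). It simply follows the rules documented in the comment above:
hold mmap_sem across the call, mark written pages dirty, and drop each
reference with put_page(). The calling convention is unchanged by this patch;
get_user_pages() merely becomes a static inline wrapper around
__get_user_pages().

#include <linux/mm.h>
#include <linux/sched.h>
#include <linux/slab.h>

/* Hypothetical example: pin nr_pages of the current task's memory at uaddr
 * for writing, use them, then mark them dirty and release them. */
static int example_pin_user_buffer(unsigned long uaddr, unsigned long nr_pages)
{
	struct page **pages;
	long pinned, i;
	int ret = 0;

	pages = kcalloc(nr_pages, sizeof(*pages), GFP_KERNEL);
	if (!pages)
		return -ENOMEM;

	/* mmap_sem must be held (for read or write) across the call. */
	down_read(&current->mm->mmap_sem);
	pinned = get_user_pages(current, current->mm, uaddr, nr_pages,
				1 /* write */, 0 /* force */, pages, NULL);
	up_read(&current->mm->mmap_sem);

	if (pinned < 0) {
		ret = pinned;
		goto out;
	}

	/*
	 * ... access the pinned pages here, e.g. via kmap() or by mapping
	 * them for DMA, taking care to use the proper cache flushing APIs.
	 * Note that pinned may be less than nr_pages; real code would have
	 * to retry or unwind in that case.
	 */

	/* write=1, so mark each page dirty before dropping the reference. */
	for (i = 0; i < pinned; i++) {
		set_page_dirty_lock(pages[i]);
		put_page(pages[i]);
	}
out:
	kfree(pages);
	return ret;
}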