From patchwork Tue Jun 11 02:11:25 2013
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 17765
From: John Stultz <john.stultz@linaro.org>
To: minchan.kim@lge.com
Cc: Minchan Kim, John Stultz
Subject: [PATCH 06/13] vrange: Add GFP_NO_VRANGE allocation flag
Date: Mon, 10 Jun 2013 19:11:25 -0700
Message-Id: <1370916692-9576-7-git-send-email-john.stultz@linaro.org>
X-Mailer: git-send-email 1.8.1.2
In-Reply-To: <1370916692-9576-1-git-send-email-john.stultz@linaro.org>
References: <1370916692-9576-1-git-send-email-john.stultz@linaro.org>

From: Minchan Kim

When cloning the vroot tree during a fork, we have to allocate memory
while holding the vroot lock. This is problematic, as the memory
allocation can trigger reclaim, which might require grabbing a vroot
lock in order to find purgeable pages.

Thus this patch introduces GFP_NO_VRANGE, which allows us to avoid
having an allocation for vrange trigger any volatile range purging.

Signed-off-by: Minchan Kim
---
 include/linux/gfp.h | 7 +++++--
 mm/vrange.c         | 2 +-
 2 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 0f615eb..fa52199 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -35,6 +35,7 @@ struct vm_area_struct;
 #define ___GFP_NO_KSWAPD	0x400000u
 #define ___GFP_OTHER_NODE	0x800000u
 #define ___GFP_WRITE		0x1000000u
+#define ___GFP_NO_VRANGE	0x2000000u
 /* If the above are modified, __GFP_BITS_SHIFT may need updating */
 
 /*
@@ -70,6 +71,7 @@ struct vm_area_struct;
 #define __GFP_HIGH	((__force gfp_t)___GFP_HIGH)	/* Should access emergency pools? */
 #define __GFP_IO	((__force gfp_t)___GFP_IO)	/* Can start physical IO? */
 #define __GFP_FS	((__force gfp_t)___GFP_FS)	/* Can call down to low-level FS? */
+#define __GFP_NO_VRANGE	((__force gfp_t)___GFP_NO_VRANGE)	/* Can't reclaim volatile pages */
 #define __GFP_COLD	((__force gfp_t)___GFP_COLD)	/* Cache-cold page required */
 #define __GFP_NOWARN	((__force gfp_t)___GFP_NOWARN)	/* Suppress page allocation failure warning */
 #define __GFP_REPEAT	((__force gfp_t)___GFP_REPEAT)	/* See above */
@@ -99,7 +101,7 @@ struct vm_area_struct;
  */
 #define __GFP_NOTRACK_FALSE_POSITIVE (__GFP_NOTRACK)
 
-#define __GFP_BITS_SHIFT 25	/* Room for N __GFP_FOO bits */
+#define __GFP_BITS_SHIFT 26	/* Room for N __GFP_FOO bits */
 #define __GFP_BITS_MASK ((__force gfp_t)((1 << __GFP_BITS_SHIFT) - 1))
 
 /* This equals 0, but use constants in case they ever change */
@@ -134,7 +136,8 @@ struct vm_area_struct;
 /* Control page allocator reclaim behavior */
 #define GFP_RECLAIM_MASK (__GFP_WAIT|__GFP_HIGH|__GFP_IO|__GFP_FS|\
 			__GFP_NOWARN|__GFP_REPEAT|__GFP_NOFAIL|\
-			__GFP_NORETRY|__GFP_MEMALLOC|__GFP_NOMEMALLOC)
+			__GFP_NORETRY|__GFP_MEMALLOC|__GFP_NOMEMALLOC|\
+			__GFP_NO_VRANGE)
 
 /* Control slab gfp mask during early boot */
 #define GFP_BOOT_MASK (__GFP_BITS_MASK & ~(__GFP_WAIT|__GFP_IO|__GFP_FS))
 
diff --git a/mm/vrange.c b/mm/vrange.c
index 0ab741e..914c109 100644
--- a/mm/vrange.c
+++ b/mm/vrange.c
@@ -204,7 +204,7 @@ int vrange_fork(struct mm_struct *new_mm, struct mm_struct *old_mm)
 		range = vrange_entry(next);
 		next = rb_next(next);
 
-		new_range = __vrange_alloc(GFP_KERNEL);
+		new_range = __vrange_alloc(GFP_KERNEL|__GFP_NO_VRANGE);
 		if (!new_range)
 			goto fail;
 		__vrange_set(new_range, range->node.start,
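
For readers following the series: the deadlock avoidance only works if the
purge side of vrange backs off when the allocation that triggered reclaim
carries __GFP_NO_VRANGE, so reclaim entered from vrange_fork() never tries
to re-take a vroot lock the caller already holds. That check lives in the
purge path added elsewhere in the series; the fragment below is only an
illustrative sketch of the idea, and the function and parameter names
(discard_vrange_pages, nr_to_discard) are placeholders, not code from this
patch.

/*
 * Illustrative sketch only -- not part of this patch. A reclaim-side
 * purge routine honoring __GFP_NO_VRANGE: if the allocation that kicked
 * off reclaim set the flag, skip volatile-range purging entirely so no
 * vroot lock is taken. Names here are hypothetical.
 */
static unsigned long discard_vrange_pages(gfp_t gfp_mask,
					  unsigned long nr_to_discard)
{
	unsigned long nr_discarded = 0;

	/* The caller (e.g. vrange_fork()) may already hold a vroot lock. */
	if (gfp_mask & __GFP_NO_VRANGE)
		return 0;

	/*
	 * Otherwise walk the vroot trees under their locks and purge up
	 * to nr_to_discard volatile pages, counting them in nr_discarded.
	 */
	return nr_discarded;
}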