From patchwork Tue Jun 11 01:12:18 2013
X-Patchwork-Submitter: John Stultz <john.stultz@linaro.org>
X-Patchwork-Id: 17758
From: John Stultz <john.stultz@linaro.org>
To: minchan@kernel.org
Cc: dgiani@mozilla.com, John Stultz <john.stultz@linaro.org>
Subject: [PATCH 12/13] vrange: Enable purging of file backed volatile ranges
Date: Mon, 10 Jun 2013 18:12:18 -0700
Message-Id: <1370913139-9320-13-git-send-email-john.stultz@linaro.org>
X-Mailer: git-send-email 1.8.1.2
In-Reply-To: <1370913139-9320-1-git-send-email-john.stultz@linaro.org>
References: <1370913139-9320-1-git-send-email-john.stultz@linaro.org>

Rework the victim range selection to also support file-backed
volatile ranges.

Signed-off-by: John Stultz <john.stultz@linaro.org>
---
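For anyone who wants to exercise the file-backed path, a minimal
userspace sketch follows. It assumes the vrange() syscall interface
introduced earlier in this series; the syscall number, the
VRANGE_VOLATILE mode value, and the purged out-parameter below are
placeholders and may not match the actual ABI:

	/* Hypothetical demo: mark a shared file mapping volatile.
	 * ASSUMED: __NR_vrange and VRANGE_VOLATILE come from earlier
	 * patches in this series; the values here are placeholders.
	 */
	#include <stdio.h>
	#include <string.h>
	#include <fcntl.h>
	#include <unistd.h>
	#include <sys/mman.h>
	#include <sys/syscall.h>

	#ifndef __NR_vrange
	#define __NR_vrange 350		/* placeholder syscall number */
	#endif
	#define VRANGE_VOLATILE 0	/* placeholder mode value */

	int main(void)
	{
		size_t len = 16 * 4096;
		int purged = 0;
		int fd = open("cache.dat", O_RDWR | O_CREAT, 0644);

		if (fd < 0 || ftruncate(fd, len) < 0)
			return 1;

		/* Shared file mapping: these pages live in the page
		 * cache, so a purge goes through discard_file_vrange().
		 */
		char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
				 MAP_SHARED, fd, 0);
		if (buf == MAP_FAILED)
			return 1;

		memset(buf, 'x', len);	/* populate the page cache */

		/* Tell the kernel this range is regenerable; under
		 * memory pressure it may now be purged instead of
		 * written back. */
		if (syscall(__NR_vrange, buf, len,
			    VRANGE_VOLATILE, &purged) < 0)
			perror("vrange");

		munmap(buf, len);
		close(fd);
		return 0;
	}

Once a range has been purged, its old contents are gone, so a caller
must treat the data as lost and regenerate it after marking the range
non-volatile again.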
 include/linux/vrange.h |  10 +++++
 mm/vrange.c            | 112 ++++++++++++++++++++++++++++++++++++-------------
 2 files changed, 92 insertions(+), 30 deletions(-)

diff --git a/include/linux/vrange.h b/include/linux/vrange.h
index b6e8b99..bd36d67 100644
--- a/include/linux/vrange.h
+++ b/include/linux/vrange.h
@@ -3,6 +3,7 @@
 #include
 #include
+#include
 
 #define vrange_entry(ptr) \
 	container_of(ptr, struct vrange, node.rb)
@@ -38,6 +39,15 @@ static inline struct mm_struct *vrange_get_owner_mm(struct vrange *vrange)
 	return container_of(vrange->owner, struct mm_struct, vroot);
 }
 
+static inline
+struct address_space *vrange_get_owner_mapping(struct vrange *vrange)
+{
+	if (vrange_type(vrange) != VRANGE_FILE)
+		return NULL;
+	return container_of(vrange->owner, struct address_space, vroot);
+}
+
+
 void vrange_init(void);
 extern int vrange_clear(struct vrange_root *vroot,
 			unsigned long start, unsigned long end);
diff --git a/mm/vrange.c b/mm/vrange.c
index e9ea728..84e9b91 100644
--- a/mm/vrange.c
+++ b/mm/vrange.c
@@ -757,8 +757,9 @@ unsigned int discard_vma_pages(struct zone *zone, struct mm_struct *mm,
 	return ret;
 }
 
-unsigned int discard_vrange(struct zone *zone, struct vrange *vrange,
-				int nr_to_discard)
+static unsigned int discard_anon_vrange(struct zone *zone,
+					struct vrange *vrange,
+					int nr_to_discard)
 {
 	struct mm_struct *mm;
 	unsigned long start = vrange->node.start;
@@ -799,46 +800,91 @@ out:
 	return nr_discarded;
 }
 
+static unsigned int discard_file_vrange(struct zone *zone,
+					struct vrange *vrange,
+					int nr_to_discard)
+{
+	struct address_space *mapping;
+	unsigned long start = vrange->node.start;
+	unsigned long end = vrange->node.last;
+	unsigned long count = ((end-start) >> PAGE_CACHE_SHIFT);
+
+	mapping = vrange_get_owner_mapping(vrange);
+
+	truncate_inode_pages_range(mapping, start, end);
+	vrange->purged = true;
+
+	return count;
+}
+
+unsigned int discard_vrange(struct zone *zone, struct vrange *vrange,
+				int nr_to_discard)
+{
+	if (vrange_type(vrange) == VRANGE_MM)
+		return discard_anon_vrange(zone, vrange, nr_to_discard);
+	return discard_file_vrange(zone, vrange, nr_to_discard);
+}
+
+
+/* Take a vrange refcount and, depending on the type,
+ * a refcount on the vrange->owner's mm or inode.
+ */
+static int hold_victim_vrange(struct vrange *vrange)
+{
+	if (vrange_type(vrange) == VRANGE_MM) {
+		struct mm_struct *mm = vrange_get_owner_mm(vrange);
+
+
+		if (atomic_read(&mm->mm_users) == 0)
+			return -1;
+
+
+		if (!atomic_inc_not_zero(&vrange->refcount))
+			return -1;
+		/*
+		 * We will access mmap_sem in a later routine, so
+		 * we need to take a refcount on the mm.
+		 * NOTE: mm_count is guaranteed non-zero here because
+		 * finding the vrange on the LRU list means we are
+		 * before exit_vrange or remove_vrange.
+		 */
+		atomic_inc(&mm->mm_count);
+	} else {
+		struct address_space *mapping;
+		mapping = vrange_get_owner_mapping(vrange);
+
+		if (!atomic_inc_not_zero(&vrange->refcount))
+			return -1;
+		__iget(mapping->host);
+	}
+
+	return 0;
+}
+
+
 /*
- * Get next victim vrange from LRU and hold a vrange refcount
- * and vrange->mm's refcount.
+ * Get next victim vrange from LRU and hold the needed refcounts.
  */
 struct vrange *get_victim_vrange(void)
 {
-	struct mm_struct *mm;
 	struct vrange *vrange = NULL;
 	struct list_head *cur, *tmp;
 
 	spin_lock(&lru_lock);
 	list_for_each_prev_safe(cur, tmp, &lru_vrange) {
 		vrange = list_entry(cur, struct vrange, lru);
-		mm = vrange_get_owner_mm(vrange);
-		/* the process is exiting so pass it */
-		if (atomic_read(&mm->mm_users) == 0) {
-			list_del_init(&vrange->lru);
-			vrange = NULL;
-			continue;
-		}
 
-		/* vrange is freeing so continue to loop */
-		if (!atomic_inc_not_zero(&vrange->refcount)) {
+		if (hold_victim_vrange(vrange)) {
 			list_del_init(&vrange->lru);
 			vrange = NULL;
 			continue;
 		}
 
-		/*
-		 * We will access mmap_sem in a later routine, so
-		 * we need to take a refcount on the mm.
-		 * NOTE: mm_count is guaranteed non-zero here because
-		 * finding the vrange on the LRU list means we are
-		 * before exit_vrange or remove_vrange.
-		 */
-		atomic_inc(&mm->mm_count);
-
 		/* Isolate vrange */
 		list_del_init(&vrange->lru);
 		break;
+
 	}
 	spin_unlock(&lru_lock);
@@ -847,9 +893,18 @@ struct vrange *get_victim_vrange(void)
 
 void put_victim_range(struct vrange *vrange)
 {
-	struct mm_struct *mm = vrange_get_owner_mm(vrange);
-
 	put_vrange(vrange);
-	mmdrop(mm);
+
+	if (vrange_type(vrange) == VRANGE_MM) {
+		struct mm_struct *mm = vrange_get_owner_mm(vrange);
+
+		mmdrop(mm);
+	} else {
+		struct address_space *mapping;
+
+		mapping = vrange_get_owner_mapping(vrange);
+		iput(mapping->host);
+	}
 }
 
 unsigned int discard_vrange_pages(struct zone *zone, int nr_to_discard)
@@ -858,11 +913,8 @@ unsigned int discard_vrange_pages(struct zone *zone, int nr_to_discard)
 	unsigned int nr_discarded = 0;
 
 	start_vrange = vrange = get_victim_vrange();
-	if (start_vrange) {
-		struct mm_struct *mm = vrange_get_owner_mm(start_vrange);
-		atomic_inc(&start_vrange->refcount);
-		atomic_inc(&mm->mm_count);
-	}
+	if (start_vrange)
+		hold_victim_vrange(start_vrange);
 
 	while (vrange) {
 		nr_discarded += discard_vrange(zone, vrange, nr_to_discard);