From patchwork Mon Mar 17 18:51:58 2014
X-Patchwork-Submitter: Steve Capper
X-Patchwork-Id: 26403
From: Steve Capper <steve.capper@linaro.org>
To: linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, akpm@linux-foundation.org, Steve Capper
Subject: [RESEND PATCH] mm: hugetlb: Introduce huge_pte_{page, present, young}
Date: Mon, 17 Mar 2014 18:51:58 +0000
Message-Id: <1395082318-7703-1-git-send-email-steve.capper@linaro.org>
Introduce huge pte versions of pte_page, pte_present and pte_mkyoung.

This allows ARM (without LPAE) to use alternative pte processing logic for huge ptes. Where these functions are not defined by architecture code, they fall back to the standard pte_ equivalents.

Signed-off-by: Steve Capper <steve.capper@linaro.org>
---
Hi,

I'm resending this patch to provoke some discussion. We already have some huge_pte_-style functions, and this patch adds a few more (which simplify to the pte_ equivalents where unspecified).

Having separate hugetlb versions of pte_page, pte_present and pte_mkyoung allows for a greatly simplified huge page implementation on ARM with the classical MMU, which has a different bit layout for huge ptes.
Cheers,

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 8c43cc4..4992487 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -353,6 +353,18 @@ static inline pte_t arch_make_huge_pte(pte_t entry, struct vm_area_struct *vma,
 }
 #endif
 
+#ifndef huge_pte_page
+#define huge_pte_page(pte)	pte_page(pte)
+#endif
+
+#ifndef huge_pte_present
+#define huge_pte_present(pte)	pte_present(pte)
+#endif
+
+#ifndef huge_pte_mkyoung
+#define huge_pte_mkyoung(pte)	pte_mkyoung(pte)
+#endif
+
 static inline struct hstate *page_hstate(struct page *page)
 {
 	VM_BUG_ON_PAGE(!PageHuge(page), page);
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index c01cb9f..d1a38c9 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2319,7 +2319,7 @@ static pte_t make_huge_pte(struct vm_area_struct *vma, struct page *page,
 		entry = huge_pte_wrprotect(mk_huge_pte(page, vma->vm_page_prot));
 	}
-	entry = pte_mkyoung(entry);
+	entry = huge_pte_mkyoung(entry);
 	entry = pte_mkhuge(entry);
 	entry = arch_make_huge_pte(entry, vma, page, writable);
@@ -2379,7 +2379,7 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 			if (cow)
 				huge_ptep_set_wrprotect(src, addr, src_pte);
 			entry = huge_ptep_get(src_pte);
-			ptepage = pte_page(entry);
+			ptepage = huge_pte_page(entry);
 			get_page(ptepage);
 			page_dup_rmap(ptepage);
 			set_huge_pte_at(dst, addr, dst_pte, entry);
@@ -2398,7 +2398,7 @@ static int is_hugetlb_entry_migration(pte_t pte)
 {
 	swp_entry_t swp;
 
-	if (huge_pte_none(pte) || pte_present(pte))
+	if (huge_pte_none(pte) || huge_pte_present(pte))
 		return 0;
 	swp = pte_to_swp_entry(pte);
 	if (non_swap_entry(swp) && is_migration_entry(swp))
@@ -2411,7 +2411,7 @@ static int is_hugetlb_entry_hwpoisoned(pte_t pte)
 {
 	swp_entry_t swp;
 
-	if (huge_pte_none(pte) || pte_present(pte))
+	if (huge_pte_none(pte) || huge_pte_present(pte))
 		return 0;
 	swp = pte_to_swp_entry(pte);
 	if (non_swap_entry(swp) && is_hwpoison_entry(swp))
@@ -2464,7 +2464,7 @@ again:
 			goto unlock;
 		}
 
-		page = pte_page(pte);
+		page = huge_pte_page(pte);
 		/*
 		 * If a reference page is supplied, it is because a specific
 		 * page is being unmapped, not a range. Ensure the page we
@@ -2614,7 +2614,7 @@ static int hugetlb_cow(struct mm_struct *mm, struct vm_area_struct *vma,
 	unsigned long mmun_start;	/* For mmu_notifiers */
 	unsigned long mmun_end;		/* For mmu_notifiers */
 
-	old_page = pte_page(pte);
+	old_page = huge_pte_page(pte);
 
retry_avoidcopy:
 	/* If no-one else is actually using this page, avoid the copy
@@ -2965,7 +2965,7 @@ int hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 	 * Note that locking order is always pagecache_page -> page,
 	 * so no worry about deadlock.
 	 */
-	page = pte_page(entry);
+	page = huge_pte_page(entry);
 	get_page(page);
 	if (page != pagecache_page)
 		lock_page(page);
@@ -2985,7 +2985,7 @@ int hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 		}
 		entry = huge_pte_mkdirty(entry);
 	}
-	entry = pte_mkyoung(entry);
+	entry = huge_pte_mkyoung(entry);
 	if (huge_ptep_set_access_flags(vma, address, ptep, entry,
 						flags & FAULT_FLAG_WRITE))
 		update_mmu_cache(vma, address, ptep);
@@ -3077,7 +3077,7 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
 		}
 
 		pfn_offset = (vaddr & ~huge_page_mask(h)) >> PAGE_SHIFT;
-		page = pte_page(huge_ptep_get(pte));
+		page = huge_pte_page(huge_ptep_get(pte));
same_page:
 		if (pages) {
 			pages[i] = mem_map_offset(page, pfn_offset);
@@ -3425,7 +3425,7 @@ follow_huge_pmd(struct mm_struct *mm, unsigned long address,
 {
 	struct page *page;
 
-	page = pte_page(*(pte_t *)pmd);
+	page = huge_pte_page(*(pte_t *)pmd);
 	if (page)
 		page += ((address & ~PMD_MASK) >> PAGE_SHIFT);
 	return page;
@@ -3437,7 +3437,7 @@ follow_huge_pud(struct mm_struct *mm, unsigned long address,
 {
 	struct page *page;
 
-	page = pte_page(*(pte_t *)pud);
+	page = huge_pte_page(*(pte_t *)pud);
 	if (page)
 		page += ((address & ~PUD_MASK) >> PAGE_SHIFT);
 	return page;