From patchwork Fri Dec 13 19:05:41 2013
X-Patchwork-Submitter: Steve Capper
X-Patchwork-Id: 22340
From: Steve Capper
To: linux-arm-kernel@lists.infradead.org
Cc: linux@arm.linux.org.uk, will.deacon@arm.com, catalin.marinas@arm.com,
 patches@linaro.org, robherring2@gmail.com, deepak.saxena@linaro.org,
 Steve Capper
Subject: [RFC PATCH 1/6] mm: hugetlb: Introduce huge_pte_page and huge_pte_present
Date: Fri, 13 Dec 2013 19:05:41 +0000
Message-Id: <1386961546-10061-2-git-send-email-steve.capper@linaro.org>
In-Reply-To: <1386961546-10061-1-git-send-email-steve.capper@linaro.org>
References: <1386961546-10061-1-git-send-email-steve.capper@linaro.org>

Introduce huge pte versions of pte_page and pte_present.

This allows ARM (without LPAE) to use alternative pte processing logic
for huge ptes.

Where these functions are not defined by architecture code, they fall
back to the standard pte_page and pte_present functions.
Signed-off-by: Steve Capper
---
 include/linux/hugetlb.h |  8 ++++++++
 mm/hugetlb.c            | 18 +++++++++---------
 2 files changed, 17 insertions(+), 9 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 9649ff0..857c298 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -355,6 +355,14 @@ static inline pte_t arch_make_huge_pte(pte_t entry, struct vm_area_struct *vma,
 }
 #endif

+#ifndef huge_pte_page
+#define huge_pte_page(pte)	pte_page(pte)
+#endif
+
+#ifndef huge_pte_present
+#define huge_pte_present(pte)	pte_present(pte)
+#endif
+
 static inline struct hstate *page_hstate(struct page *page)
 {
 	return size_to_hstate(PAGE_SIZE << compound_order(page));
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index dee6cf4..b725f21 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2378,7 +2378,7 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 			if (cow)
 				huge_ptep_set_wrprotect(src, addr, src_pte);
 			entry = huge_ptep_get(src_pte);
-			ptepage = pte_page(entry);
+			ptepage = huge_pte_page(entry);
 			get_page(ptepage);
 			page_dup_rmap(ptepage);
 			set_huge_pte_at(dst, addr, dst_pte, entry);
@@ -2396,7 +2396,7 @@ static int is_hugetlb_entry_migration(pte_t pte)
 {
 	swp_entry_t swp;

-	if (huge_pte_none(pte) || pte_present(pte))
+	if (huge_pte_none(pte) || huge_pte_present(pte))
 		return 0;
 	swp = pte_to_swp_entry(pte);
 	if (non_swap_entry(swp) && is_migration_entry(swp))
@@ -2409,7 +2409,7 @@ static int is_hugetlb_entry_hwpoisoned(pte_t pte)
 {
 	swp_entry_t swp;

-	if (huge_pte_none(pte) || pte_present(pte))
+	if (huge_pte_none(pte) || huge_pte_present(pte))
 		return 0;
 	swp = pte_to_swp_entry(pte);
 	if (non_swap_entry(swp) && is_hwpoison_entry(swp))
@@ -2462,7 +2462,7 @@ again:
 			goto unlock;
 		}

-		page = pte_page(pte);
+		page = huge_pte_page(pte);
 		/*
 		 * If a reference page is supplied, it is because a specific
 		 * page is being unmapped, not a range. Ensure the page we
@@ -2612,7 +2612,7 @@ static int hugetlb_cow(struct mm_struct *mm, struct vm_area_struct *vma,
 	unsigned long mmun_start;	/* For mmu_notifiers */
 	unsigned long mmun_end;		/* For mmu_notifiers */

-	old_page = pte_page(pte);
+	old_page = huge_pte_page(pte);

retry_avoidcopy:
 	/* If no-one else is actually using this page, avoid the copy
@@ -2963,7 +2963,7 @@ int hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 	 * Note that locking order is always pagecache_page -> page,
 	 * so no worry about deadlock.
 	 */
-	page = pte_page(entry);
+	page = huge_pte_page(entry);
 	get_page(page);
 	if (page != pagecache_page)
 		lock_page(page);
@@ -3075,7 +3075,7 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
 		}

 		pfn_offset = (vaddr & ~huge_page_mask(h)) >> PAGE_SHIFT;
-		page = pte_page(huge_ptep_get(pte));
+		page = huge_pte_page(huge_ptep_get(pte));
same_page:
 		if (pages) {
 			pages[i] = mem_map_offset(page, pfn_offset);
@@ -3423,7 +3423,7 @@ follow_huge_pmd(struct mm_struct *mm, unsigned long address,
 {
 	struct page *page;

-	page = pte_page(*(pte_t *)pmd);
+	page = huge_pte_page(*(pte_t *)pmd);
 	if (page)
 		page += ((address & ~PMD_MASK) >> PAGE_SHIFT);
 	return page;
@@ -3435,7 +3435,7 @@ follow_huge_pud(struct mm_struct *mm, unsigned long address,
 {
 	struct page *page;

-	page = pte_page(*(pte_t *)pud);
+	page = huge_pte_page(*(pte_t *)pud);
 	if (page)
 		page += ((address & ~PUD_MASK) >> PAGE_SHIFT);
 	return page;