Message ID | 20220913195508.3511038-4-opendmb@gmail.com
---|---
State | New
Series | [01/21] mm/page_isolation: protect cma from isolate_single_pageblock
On 9/13/2022 4:34 PM, Matthew Wilcox wrote:
> On Tue, Sep 13, 2022 at 12:54:50PM -0700, Doug Berger wrote:
>> With gigantic pages it may not be true that struct page structures
>> are contiguous across the entire gigantic page. The mem_map_offset
>> function is used here in place of direct pointer arithmetic to
>> correct for this.
>
> We're just eliminating mem_map_offset(). Please use nth_page()
> instead.

That's good to know. I will include that in v2.

>
>> 	for (i = 0; i < pages_per_huge_page(h);
>> 				i += pages_per_huge_page(target_hstate)) {
>> +		subpage = mem_map_offset(page, i);
>> 		if (hstate_is_gigantic(target_hstate))
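[Editor's note: for reference, a minimal sketch of what the demote loop might look like in v2 with Matthew's suggested nth_page() substituted for mem_map_offset(); this is not the actual v2 patch, and the surrounding context is assumed from the diff shown further below.]

	/*
	 * Hypothetical v2 of the demote loop: nth_page() handles a
	 * possibly discontiguous memmap (SPARSEMEM without VMEMMAP)
	 * just as mem_map_offset() did, so "page + i" is avoided.
	 */
	for (i = 0; i < pages_per_huge_page(h);
				i += pages_per_huge_page(target_hstate)) {
		subpage = nth_page(page, i);
		if (hstate_is_gigantic(target_hstate))
			prep_compound_gigantic_page_for_demote(subpage,
							target_hstate->order);
		else
			prep_compound_page(subpage, target_hstate->order);
		set_page_private(subpage, 0);
		set_page_refcounted(subpage);
		prep_new_huge_page(target_hstate, subpage, nid);
		put_page(subpage);
	}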
On 09/13/22 18:07, Doug Berger wrote:
> On 9/13/2022 4:34 PM, Matthew Wilcox wrote:
> > On Tue, Sep 13, 2022 at 12:54:50PM -0700, Doug Berger wrote:
> > > With gigantic pages it may not be true that struct page structures
> > > are contiguous across the entire gigantic page. The mem_map_offset
> > > function is used here in place of direct pointer arithmetic to
> > > correct for this.
> >
> > We're just eliminating mem_map_offset(). Please use nth_page()
> > instead.
>
> That's good to know. I will include that in v2.

Thanks Doug and Matthew. I will take a closer look at this series soon.

It seems like this patch is a fix independent of the series. If so, I
would suggest sending separate to make it easy for backports to stable.
On 9/14/2022 10:08 AM, Mike Kravetz wrote:
> On 09/13/22 18:07, Doug Berger wrote:
>> On 9/13/2022 4:34 PM, Matthew Wilcox wrote:
>>> On Tue, Sep 13, 2022 at 12:54:50PM -0700, Doug Berger wrote:
>>>> With gigantic pages it may not be true that struct page structures
>>>> are contiguous across the entire gigantic page. The mem_map_offset
>>>> function is used here in place of direct pointer arithmetic to
>>>> correct for this.
>>>
>>> We're just eliminating mem_map_offset(). Please use nth_page()
>>> instead.
>>
>> That's good to know. I will include that in v2.
>
> Thanks Doug and Matthew. I will take a closer look at this series soon.
>
> It seems like this patch is a fix independent of the series. If so, I
> would suggest sending separate to make it easy for backports to stable.

Yes, as I noted in [PATCH 00/21] the first three patches fit that
description, but I included them here in case someone was brave enough
to attempt to use this patch set. They were in my branch for my own
testing.

Full disclosure: An earlier version of this patch set had more complete
support for hugepage isolation that included migrating the isolation
state when demoting a hugepage, which touched lines in
demote_free_huge_page() and depended on the subpage variable introduced
here.

At this point I will submit a patch for this on its own and will likely
remove the first three commits when submitting V2 of the set.

Thanks for your consideration.
-Doug
> On Sep 14, 2022, at 03:54, Doug Berger <opendmb@gmail.com> wrote:
>
> With gigantic pages it may not be true that struct page structures
> are contiguous across the entire gigantic page. The mem_map_offset
> function is used here in place of direct pointer arithmetic to
> correct for this.
>
> Fixes: 8531fc6f52f5 ("hugetlb: add hugetlb demote page support")
> Signed-off-by: Doug Berger <opendmb@gmail.com>

With Matthew's suggestion.

Acked-by: Muchun Song <songmuchun@bytedance.com>

Thanks.
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 79949893ac12..a1d51a1f0404 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3420,6 +3420,7 @@ static int demote_free_huge_page(struct hstate *h, struct page *page)
 {
 	int i, nid = page_to_nid(page);
 	struct hstate *target_hstate;
+	struct page *subpage;
 	int rc = 0;
 
 	target_hstate = size_to_hstate(PAGE_SIZE << h->demote_order);
@@ -3453,15 +3454,16 @@ static int demote_free_huge_page(struct hstate *h, struct page *page)
 	mutex_lock(&target_hstate->resize_lock);
 	for (i = 0; i < pages_per_huge_page(h);
 				i += pages_per_huge_page(target_hstate)) {
+		subpage = mem_map_offset(page, i);
 		if (hstate_is_gigantic(target_hstate))
-			prep_compound_gigantic_page_for_demote(page + i,
+			prep_compound_gigantic_page_for_demote(subpage,
 							target_hstate->order);
 		else
-			prep_compound_page(page + i, target_hstate->order);
-		set_page_private(page + i, 0);
-		set_page_refcounted(page + i);
-		prep_new_huge_page(target_hstate, page + i, nid);
-		put_page(page + i);
+			prep_compound_page(subpage, target_hstate->order);
+		set_page_private(subpage, 0);
+		set_page_refcounted(subpage);
+		prep_new_huge_page(target_hstate, subpage, nid);
+		put_page(subpage);
 	}
 	mutex_unlock(&target_hstate->resize_lock);
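[Editor's note: for context on why plain "page + i" can be wrong here: with SPARSEMEM and without SPARSEMEM_VMEMMAP the memmap is allocated per memory section, so the struct pages backing a gigantic page that spans sections need not be virtually contiguous. A rough sketch of the helpers involved, paraphrased from mm/internal.h and include/linux/mm.h of this kernel era; exact definitions may differ by version:]

	/* Sketch of nth_page(): falls back to pfn arithmetic when the
	 * memmap may be discontiguous (SPARSEMEM without VMEMMAP). */
	#if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
	#define nth_page(page, n)	pfn_to_page(page_to_pfn((page)) + (n))
	#else
	#define nth_page(page, n)	((page) + (n))
	#endif

	/* Sketch of mem_map_offset(): only takes the slow path once the
	 * offset could cross out of a MAX_ORDER block of struct pages. */
	static inline struct page *mem_map_offset(struct page *base, int offset)
	{
		if (unlikely(offset >= MAX_ORDER_NR_PAGES))
			return nth_page(base, offset);
		return base + offset;
	}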
With gigantic pages it may not be true that struct page structures
are contiguous across the entire gigantic page. The mem_map_offset
function is used here in place of direct pointer arithmetic to
correct for this.

Fixes: 8531fc6f52f5 ("hugetlb: add hugetlb demote page support")
Signed-off-by: Doug Berger <opendmb@gmail.com>
---
 mm/hugetlb.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)
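[Editor's note: a worked example of the loop the patch touches, assuming x86-64 defaults (4 KB base pages, 2 MB and 1 GB hugetlb sizes, 128 MB sparsemem sections); other configurations differ. Demoting a 1 GB page to 2 MB pages gives pages_per_huge_page(h) = 262144 and pages_per_huge_page(target_hstate) = 512, so the loop runs 512 times with i = 0, 512, 1024, ... Without VMEMMAP the struct page array is only guaranteed contiguous within a 128 MB section (32768 base pages), so "page + i" stops being safe long before the loop finishes, which is why each iteration recomputes subpage via mem_map_offset() (or nth_page() in the planned v2).]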