| Message ID | 20220610154013.68259-1-jlayton@kernel.org |
|---|---|
| State | New |
| Series | ceph: switch back to testing for NULL folio->private in ceph_dirty_folio |
On 6/10/22 11:40 PM, Jeff Layton wrote:
> Willy requested that we change this back to warning on folio->private
> being non-NULL. He's trying to kill off the PG_private flag, and so we'd
> like to catch where it's non-NULL.
>
> Add a VM_WARN_ON_FOLIO (since it doesn't exist yet) and change over to
> using that instead of VM_BUG_ON_FOLIO along with testing the ->private
> pointer.
>
> Cc: Matthew Wilcox <willy@infradead.org>
> Signed-off-by: Jeff Layton <jlayton@kernel.org>
> ---
>  fs/ceph/addr.c          | 2 +-
>  include/linux/mmdebug.h | 9 +++++++++
>  2 files changed, 10 insertions(+), 1 deletion(-)
>
> diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
> index b43cc01a61db..b24d6bdb91db 100644
> --- a/fs/ceph/addr.c
> +++ b/fs/ceph/addr.c
> @@ -122,7 +122,7 @@ static bool ceph_dirty_folio(struct address_space *mapping, struct folio *folio)
>           * Reference snap context in folio->private. Also set
>           * PagePrivate so that we get invalidate_folio callback.
>           */
> -        VM_BUG_ON_FOLIO(folio_test_private(folio), folio);
> +        VM_WARN_ON_FOLIO(folio->private, folio);
>          folio_attach_private(folio, snapc);
>
>          return ceph_fscache_dirty_folio(mapping, folio);
> diff --git a/include/linux/mmdebug.h b/include/linux/mmdebug.h
> index d7285f8148a3..5107bade2ab2 100644
> --- a/include/linux/mmdebug.h
> +++ b/include/linux/mmdebug.h
> @@ -54,6 +54,15 @@ void dump_mm(const struct mm_struct *mm);
>          }                                                       \
>          unlikely(__ret_warn_once);                              \
>  })
> +#define VM_WARN_ON_FOLIO(cond, folio) ({                        \
> +        int __ret_warn = !!(cond);                              \
> +                                                                \
> +        if (unlikely(__ret_warn)) {                             \
> +                dump_page(&folio->page, "VM_WARN_ON_FOLIO(" __stringify(cond)")");\
> +                WARN_ON(1);                                     \
> +        }                                                       \
> +        unlikely(__ret_warn);                                   \
> +})
>  #define VM_WARN_ON_ONCE_FOLIO(cond, folio) ({                   \
>          static bool __section(".data.once") __warned;           \
>          int __ret_warn_once = !!(cond);                         \

All tests passed except the known issues.

Merged into the testing branch, thanks Jeff!

-- Xiubo
On 6/10/22 11:40 PM, Jeff Layton wrote:
> Willy requested that we change this back to warning on folio->private
> being non-NULL. He's trying to kill off the PG_private flag, and so we'd
> like to catch where it's non-NULL.
>
> Add a VM_WARN_ON_FOLIO (since it doesn't exist yet) and change over to
> using that instead of VM_BUG_ON_FOLIO along with testing the ->private
> pointer.
>
> Cc: Matthew Wilcox <willy@infradead.org>
> Signed-off-by: Jeff Layton <jlayton@kernel.org>
> ---
>  fs/ceph/addr.c          | 2 +-
>  include/linux/mmdebug.h | 9 +++++++++
>  2 files changed, 10 insertions(+), 1 deletion(-)
>
> diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
> index b43cc01a61db..b24d6bdb91db 100644
> --- a/fs/ceph/addr.c
> +++ b/fs/ceph/addr.c
> @@ -122,7 +122,7 @@ static bool ceph_dirty_folio(struct address_space *mapping, struct folio *folio)
>           * Reference snap context in folio->private. Also set
>           * PagePrivate so that we get invalidate_folio callback.
>           */
> -        VM_BUG_ON_FOLIO(folio_test_private(folio), folio);
> +        VM_WARN_ON_FOLIO(folio->private, folio);
>          folio_attach_private(folio, snapc);
>
>          return ceph_fscache_dirty_folio(mapping, folio);
> diff --git a/include/linux/mmdebug.h b/include/linux/mmdebug.h
> index d7285f8148a3..5107bade2ab2 100644
> --- a/include/linux/mmdebug.h
> +++ b/include/linux/mmdebug.h
> @@ -54,6 +54,15 @@ void dump_mm(const struct mm_struct *mm);
>          }                                                       \
>          unlikely(__ret_warn_once);                              \
>  })
> +#define VM_WARN_ON_FOLIO(cond, folio) ({                        \
> +        int __ret_warn = !!(cond);                              \
> +                                                                \
> +        if (unlikely(__ret_warn)) {                             \
> +                dump_page(&folio->page, "VM_WARN_ON_FOLIO(" __stringify(cond)")");\
> +                WARN_ON(1);                                     \
> +        }                                                       \
> +        unlikely(__ret_warn);                                   \
> +})

I have fixed the compile warning reported by the kernel test robot by
also defining this macro for the case where DEBUG_VM is disabled, in the
testing branch.

-- Xiubo

>  #define VM_WARN_ON_ONCE_FOLIO(cond, folio) ({                   \
>          static bool __section(".data.once") __warned;           \
>          int __ret_warn_once = !!(cond);                         \
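The compile warning mentioned above comes from configurations built without CONFIG_DEBUG_VM, where the new VM_WARN_ON_FOLIO() would otherwise be undefined at its call site. The existing VM_WARN_* helpers in include/linux/mmdebug.h handle that case with BUILD_BUG_ON_INVALID() stubs, so the testing-branch fix presumably adds a matching one. A minimal sketch, assuming it mirrors those existing fallbacks rather than quoting the actual commit:

```c
/*
 * Sketch only: assumes the fix mirrors the existing !CONFIG_DEBUG_VM stubs in
 * include/linux/mmdebug.h (e.g. VM_WARN_ON_ONCE_FOLIO). BUILD_BUG_ON_INVALID()
 * type-checks the condition but emits no code, so callers still compile
 * cleanly when the debug checks are configured out.
 */
#ifdef CONFIG_DEBUG_VM
/* ... the dump_page()/WARN_ON() definition from the patch above ... */
#else
#define VM_WARN_ON_FOLIO(cond, folio)   BUILD_BUG_ON_INVALID(cond)
#endif
```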
On Mon, Jun 13, 2022 at 08:48:40AM +0800, Xiubo Li wrote:
>
> On 6/10/22 11:40 PM, Jeff Layton wrote:
> > Willy requested that we change this back to warning on folio->private
> > being non-NULL. He's trying to kill off the PG_private flag, and so we'd
> > like to catch where it's non-NULL.
> >
> > Add a VM_WARN_ON_FOLIO (since it doesn't exist yet) and change over to
> > using that instead of VM_BUG_ON_FOLIO along with testing the ->private
> > pointer.
> >
> > Cc: Matthew Wilcox <willy@infradead.org>
> > Signed-off-by: Jeff Layton <jlayton@kernel.org>
> > ---
> >  fs/ceph/addr.c          | 2 +-
> >  include/linux/mmdebug.h | 9 +++++++++
> >  2 files changed, 10 insertions(+), 1 deletion(-)
> >
> > diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
> > index b43cc01a61db..b24d6bdb91db 100644
> > --- a/fs/ceph/addr.c
> > +++ b/fs/ceph/addr.c
> > @@ -122,7 +122,7 @@ static bool ceph_dirty_folio(struct address_space *mapping, struct folio *folio)
> >           * Reference snap context in folio->private. Also set
> >           * PagePrivate so that we get invalidate_folio callback.
> >           */
> > -        VM_BUG_ON_FOLIO(folio_test_private(folio), folio);
> > +        VM_WARN_ON_FOLIO(folio->private, folio);
> >          folio_attach_private(folio, snapc);
> >
> >          return ceph_fscache_dirty_folio(mapping, folio);

I found a couple of places where page->private needs to be NULLed out.
Neither of them is Ceph's fault. I decided that testing whether
folio->private and PG_private are in agreement was better done in
folio_unlock() than in any of the other potential places we could
check for it.

diff --git a/mm/filemap.c b/mm/filemap.c
index 8ef861297ffb..acef71f75e78 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1535,6 +1535,9 @@ void folio_unlock(struct folio *folio)
         BUILD_BUG_ON(PG_waiters != 7);
         BUILD_BUG_ON(PG_locked > 7);
         VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
+        VM_BUG_ON_FOLIO(!folio_test_private(folio) &&
+                        !folio_test_swapbacked(folio) &&
+                        folio_get_private(folio), folio);
         if (clear_bit_unlock_is_negative_byte(PG_locked, folio_flags(folio, 0)))
                 folio_wake_bit(folio, PG_locked);
 }
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 2e2a8b5bc567..af0751a79c19 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2438,6 +2438,7 @@ static void __split_huge_page_tail(struct page *head, int tail,
                         page_tail);
         page_tail->mapping = head->mapping;
         page_tail->index = head->index + tail;
+        page_tail->private = 0;
 
         /* Page flags must be visible before we make the page non-compound. */
         smp_wmb();
diff --git a/mm/migrate.c b/mm/migrate.c
index eb62e026c501..fa8e36e74f0d 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1157,6 +1157,8 @@ static int unmap_and_move(new_page_t get_new_page,
         newpage = get_new_page(page, private);
         if (!newpage)
                 return -ENOMEM;
+        BUG_ON(compound_order(newpage) != compound_order(page));
+        newpage->private = 0;
 
         rc = __unmap_and_move(page, newpage, force, mode);
         if (rc == MIGRATEPAGE_SUCCESS)
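As background (context only, not part of the patch above): ->private and PG_private are normally set and cleared together by the helpers in include/linux/pagemap.h, and both paths touched here fill in struct page fields by hand rather than going through those helpers, which is how a stale pointer can survive. Roughly, the normal clear path looks like this:

```c
/*
 * Simplified from include/linux/pagemap.h around the 5.19 cycle; shown for
 * context, not part of Willy's patch. Detaching clears PG_private, NULLs
 * ->private and drops the reference folio_attach_private() took, keeping the
 * flag and the pointer in agreement.
 */
static inline void *folio_detach_private(struct folio *folio)
{
        void *data = folio_get_private(folio);

        if (!folio_test_private(folio))
                return NULL;
        folio_clear_private(folio);
        folio->private = NULL;
        folio_put(folio);       /* pairs with folio_get() at attach time */

        return data;
}
```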
On 6/19/22 11:49 AM, Matthew Wilcox wrote:
> On Mon, Jun 13, 2022 at 08:48:40AM +0800, Xiubo Li wrote:
>> On 6/10/22 11:40 PM, Jeff Layton wrote:
>>> Willy requested that we change this back to warning on folio->private
>>> being non-NULL. He's trying to kill off the PG_private flag, and so we'd
>>> like to catch where it's non-NULL.
>>>
>>> Add a VM_WARN_ON_FOLIO (since it doesn't exist yet) and change over to
>>> using that instead of VM_BUG_ON_FOLIO along with testing the ->private
>>> pointer.
>>>
>>> Cc: Matthew Wilcox <willy@infradead.org>
>>> Signed-off-by: Jeff Layton <jlayton@kernel.org>
>>> ---
>>>  fs/ceph/addr.c          | 2 +-
>>>  include/linux/mmdebug.h | 9 +++++++++
>>>  2 files changed, 10 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
>>> index b43cc01a61db..b24d6bdb91db 100644
>>> --- a/fs/ceph/addr.c
>>> +++ b/fs/ceph/addr.c
>>> @@ -122,7 +122,7 @@ static bool ceph_dirty_folio(struct address_space *mapping, struct folio *folio)
>>>           * Reference snap context in folio->private. Also set
>>>           * PagePrivate so that we get invalidate_folio callback.
>>>           */
>>> -        VM_BUG_ON_FOLIO(folio_test_private(folio), folio);
>>> +        VM_WARN_ON_FOLIO(folio->private, folio);
>>>          folio_attach_private(folio, snapc);
>>>          return ceph_fscache_dirty_folio(mapping, folio);
> I found a couple of places where page->private needs to be NULLed out.
> Neither of them is Ceph's fault. I decided that testing whether
> folio->private and PG_private are in agreement was better done in
> folio_unlock() than in any of the other potential places we could
> check for it.

Hi Willy,

Cool. I will test this patch today.

Thanks!

-- Xiubo

> diff --git a/mm/filemap.c b/mm/filemap.c
> index 8ef861297ffb..acef71f75e78 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -1535,6 +1535,9 @@ void folio_unlock(struct folio *folio)
>          BUILD_BUG_ON(PG_waiters != 7);
>          BUILD_BUG_ON(PG_locked > 7);
>          VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
> +        VM_BUG_ON_FOLIO(!folio_test_private(folio) &&
> +                        !folio_test_swapbacked(folio) &&
> +                        folio_get_private(folio), folio);
>          if (clear_bit_unlock_is_negative_byte(PG_locked, folio_flags(folio, 0)))
>                  folio_wake_bit(folio, PG_locked);
>  }
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 2e2a8b5bc567..af0751a79c19 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -2438,6 +2438,7 @@ static void __split_huge_page_tail(struct page *head, int tail,
>                          page_tail);
>          page_tail->mapping = head->mapping;
>          page_tail->index = head->index + tail;
> +        page_tail->private = 0;
>
>          /* Page flags must be visible before we make the page non-compound. */
>          smp_wmb();
> diff --git a/mm/migrate.c b/mm/migrate.c
> index eb62e026c501..fa8e36e74f0d 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -1157,6 +1157,8 @@ static int unmap_and_move(new_page_t get_new_page,
>          newpage = get_new_page(page, private);
>          if (!newpage)
>                  return -ENOMEM;
> +        BUG_ON(compound_order(newpage) != compound_order(page));
> +        newpage->private = 0;
>
>          rc = __unmap_and_move(page, newpage, force, mode);
>          if (rc == MIGRATEPAGE_SUCCESS)
>
diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index b43cc01a61db..b24d6bdb91db 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -122,7 +122,7 @@ static bool ceph_dirty_folio(struct address_space *mapping, struct folio *folio)
          * Reference snap context in folio->private. Also set
          * PagePrivate so that we get invalidate_folio callback.
          */
-        VM_BUG_ON_FOLIO(folio_test_private(folio), folio);
+        VM_WARN_ON_FOLIO(folio->private, folio);
         folio_attach_private(folio, snapc);
 
         return ceph_fscache_dirty_folio(mapping, folio);
diff --git a/include/linux/mmdebug.h b/include/linux/mmdebug.h
index d7285f8148a3..5107bade2ab2 100644
--- a/include/linux/mmdebug.h
+++ b/include/linux/mmdebug.h
@@ -54,6 +54,15 @@ void dump_mm(const struct mm_struct *mm);
         }                                                       \
         unlikely(__ret_warn_once);                              \
 })
+#define VM_WARN_ON_FOLIO(cond, folio) ({                        \
+        int __ret_warn = !!(cond);                              \
+                                                                \
+        if (unlikely(__ret_warn)) {                             \
+                dump_page(&folio->page, "VM_WARN_ON_FOLIO(" __stringify(cond)")");\
+                WARN_ON(1);                                     \
+        }                                                       \
+        unlikely(__ret_warn);                                   \
+})
 #define VM_WARN_ON_ONCE_FOLIO(cond, folio) ({                   \
         static bool __section(".data.once") __warned;           \
         int __ret_warn_once = !!(cond);                         \
Willy requested that we change this back to warning on folio->private
being non-NULL. He's trying to kill off the PG_private flag, and so we'd
like to catch where it's non-NULL.

Add a VM_WARN_ON_FOLIO (since it doesn't exist yet) and change over to
using that instead of VM_BUG_ON_FOLIO, along with testing the ->private
pointer.

Cc: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Jeff Layton <jlayton@kernel.org>
---
 fs/ceph/addr.c          | 2 +-
 include/linux/mmdebug.h | 9 +++++++++
 2 files changed, 10 insertions(+), 1 deletion(-)
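One note on why testing the pointer is as strong as testing the flag: folio_attach_private() in include/linux/pagemap.h sets both together, so a dirtying path that finds a non-NULL ->private indicates the same bug whether or not PG_private still exists. A simplified sketch of that helper, for context, from roughly the same kernel era as this series:

```c
/*
 * Simplified from include/linux/pagemap.h around the 5.19 cycle, for context.
 * ceph_dirty_folio() calls this right after the warning added above, so a
 * non-NULL ->private at that point means a previous attach was never detached.
 */
static inline void folio_attach_private(struct folio *folio, void *data)
{
        folio_get(folio);          /* hold a reference while data is attached */
        folio->private = data;
        folio_set_private(folio);  /* sets PG_private, enabling invalidate_folio */
}
```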