
mm: Remove bogus VM_BUG_ON

Message ID: 20210818144932.940640-1-willy@infradead.org
State: New
Series: mm: Remove bogus VM_BUG_ON

Commit Message

Matthew Wilcox Aug. 18, 2021, 2:49 p.m. UTC
It is not safe to check page->index without holding the page lock.
It can be changed if the page is moved between the swap cache and the
page cache for a shmem file, for example.  There is a VM_BUG_ON below
which checks page->index is correct after taking the page lock.

Cc: stable@vger.kernel.org
Fixes: 5c211ba29deb ("mm: add and use find_lock_entries")
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/filemap.c | 1 -
 1 file changed, 1 deletion(-)
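
The race described above is easiest to see in miniature. Below is a
userspace sketch, not kernel code: struct fake_page, mover() and the
toggled index field are all invented for illustration. It shows why a
field that a concurrent writer rewrites under a lock may be sampled
without that lock, but must not be asserted on until the lock is held,
which is the position the removed VM_BUG_ON_PAGE() was in.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

struct fake_page {
	pthread_mutex_t lock;		/* stands in for the page lock */
	_Atomic unsigned long index;	/* stands in for page->index */
};

static struct fake_page page = { PTHREAD_MUTEX_INITIALIZER, 0 };
static atomic_bool stop;

/* Writer: rewrites index only under the lock, the way the commit
 * message suspects shmem swap-cache transitions do. */
static void *mover(void *arg)
{
	(void)arg;
	while (!atomic_load(&stop)) {
		pthread_mutex_lock(&page.lock);
		page.index ^= 1;
		pthread_mutex_unlock(&page.lock);
	}
	return NULL;
}

int main(void)
{
	pthread_t t;
	unsigned long stale = 0;

	pthread_create(&t, NULL, mover, NULL);
	for (int i = 0; i < 100000; i++) {
		/* Unlocked sample: this is where the removed
		 * VM_BUG_ON_PAGE() compared page->index. */
		unsigned long expected = page.index;

		pthread_mutex_lock(&page.lock);
		/* Under the lock the field is stable, but the earlier
		 * sample may already be stale. */
		if (page.index != expected)
			stale++;
		pthread_mutex_unlock(&page.lock);
	}
	atomic_store(&stop, true);
	pthread_join(t, NULL);
	printf("sample went stale before the lock %lu times\n", stale);
	return 0;
}

On most runs the counter is non-zero: equality that held at the
unlocked sample need not hold once the lock is taken, so an unlocked
assertion on page->index can fire even when nothing is wrong (the
locked checks a few lines later would simply have skipped the page).
Whether shmem really rewrites page->index this way is exactly what is
questioned in the comments below.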

Comments

Hugh Dickins Aug. 18, 2021, 4:34 p.m. UTC | #1
On Wed, 18 Aug 2021, Matthew Wilcox (Oracle) wrote:

> It is not safe to check page->index without holding the page lock.
> It can be changed if the page is moved between the swap cache and the
> page cache for a shmem file, for example.  There is a VM_BUG_ON below
> which checks page->index is correct after taking the page lock.
> 
> Cc: stable@vger.kernel.org
> Fixes: 5c211ba29deb ("mm: add and use find_lock_entries")

I don't mind the VM_BUG_ON_PAGE() being removed, but I question whether
this Fixes anything and needs to go to stable. Or maybe it's just that
the shmem example is wrong: moving shmem from page cache to swap cache
does not change page->index. Or maybe you have later changes in your
tree which change that and do require this. Otherwise, I'll have to
worry why my testing has missed it for six months.

Hugh

> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
>  mm/filemap.c | 1 -
>  1 file changed, 1 deletion(-)
> 
> diff --git a/mm/filemap.c b/mm/filemap.c
> index d1458ecf2f51..34de0b14aaa9 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -2033,17 +2033,16 @@ unsigned find_lock_entries(struct address_space *mapping, pgoff_t start,
>  	XA_STATE(xas, &mapping->i_pages, start);
>  	struct page *page;
>  
>  	rcu_read_lock();
>  	while ((page = find_get_entry(&xas, end, XA_PRESENT))) {
>  		if (!xa_is_value(page)) {
>  			if (page->index < start)
>  				goto put;
> -			VM_BUG_ON_PAGE(page->index != xas.xa_index, page);
>  			if (page->index + thp_nr_pages(page) - 1 > end)
>  				goto put;
>  			if (!trylock_page(page))
>  				goto put;
>  			if (page->mapping != mapping || PageWriteback(page))
>  				goto unlock;
>  			VM_BUG_ON_PAGE(!thp_contains(page, xas.xa_index),
>  					page);
> -- 
> 2.30.2
Matthew Wilcox Aug. 18, 2021, 4:45 p.m. UTC | #2
On Wed, Aug 18, 2021 at 09:34:51AM -0700, Hugh Dickins wrote:
> On Wed, 18 Aug 2021, Matthew Wilcox (Oracle) wrote:
> 
> > It is not safe to check page->index without holding the page lock.
> > It can be changed if the page is moved between the swap cache and the
> > page cache for a shmem file, for example.  There is a VM_BUG_ON below
> > which checks page->index is correct after taking the page lock.
> > 
> > Cc: stable@vger.kernel.org
> > Fixes: 5c211ba29deb ("mm: add and use find_lock_entries")
> 
> I don't mind the VM_BUG_ON_PAGE() being removed, but I question whether
> this Fixes anything and needs to go to stable. Or maybe it's just that
> the shmem example is wrong: moving shmem from page cache to swap cache
> does not change page->index. Or maybe you have later changes in your
> tree which change that and do require this. Otherwise, I'll have to
> worry why my testing has missed it for six months.

I'm sorry, I think you're going to have to worry :-(  syzbot found
it initially:

https://lore.kernel.org/linux-mm/0000000000009cfcda05c926b34b@google.com/

and then I hit it today during my testing (which is definitely due to
further changes in my tree).

I should have added:

Reported-by: syzbot+c87be4f669d920c76330@syzkaller.appspotmail.com

Patch

diff --git a/mm/filemap.c b/mm/filemap.c
index d1458ecf2f51..34de0b14aaa9 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2033,17 +2033,16 @@ unsigned find_lock_entries(struct address_space *mapping, pgoff_t start,
 	XA_STATE(xas, &mapping->i_pages, start);
 	struct page *page;
 
 	rcu_read_lock();
 	while ((page = find_get_entry(&xas, end, XA_PRESENT))) {
 		if (!xa_is_value(page)) {
 			if (page->index < start)
 				goto put;
-			VM_BUG_ON_PAGE(page->index != xas.xa_index, page);
 			if (page->index + thp_nr_pages(page) - 1 > end)
 				goto put;
 			if (!trylock_page(page))
 				goto put;
 			if (page->mapping != mapping || PageWriteback(page))
 				goto unlock;
 			VM_BUG_ON_PAGE(!thp_contains(page, xas.xa_index),
 					page);
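
Note which check survives: VM_BUG_ON_PAGE(!thp_contains(page,
xas.xa_index), page) runs only after trylock_page() has succeeded and
page->mapping has been re-validated, so page->index is stable at that
point. As a rough guide to what that predicate computes, here is a
userspace paraphrase of its non-hugetlbfs branch (simplified and
hedged; covers() and the standalone main() are illustrative, not
kernel API):

#include <stdbool.h>
#include <stdio.h>

typedef unsigned long pgoff_t;

/* A compound page whose head sits at head_index and spans nr_pages
 * page-cache slots (nr_pages is a power of two) contains the slot at
 * index iff head_index equals index rounded down to an nr_pages
 * boundary. */
static bool covers(pgoff_t head_index, unsigned long nr_pages,
		   pgoff_t index)
{
	return head_index == (index & ~(nr_pages - 1));
}

int main(void)
{
	/* A 512-page (2MB on x86-64) THP with its head at index 512
	 * covers slots 512..1023. */
	printf("%d\n", covers(512, 512, 700));	/* 1: inside the THP */
	printf("%d\n", covers(512, 512, 1024));	/* 0: the next THP */
	return 0;
}

A range check is used rather than strict equality because xas.xa_index
may refer to a tail slot of a THP, while page->index always holds the
head page's index.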