[0/12,v4] fs: Hole punch vs page cache filling races

Message ID 20210423171010.12-1-jack@suse.cz

Message

Jan Kara April 23, 2021, 5:29 p.m. UTC
Hello,

here is another version of my patches to address races between hole punching
and page cache filling functions for ext4 and other filesystems. I think
we are coming close to a complete solution so I've removed the RFC tag from
the subject. I went through all filesystems supporting hole punching and
converted them from their private locks to a generic one (usually fixing the
race ext4 had as a side effect). I also found out ceph & cifs didn't have
any protection from the hole punch vs page fault race either so I've added
appropriate protections there. Still open are GFS2 and OCFS2. GFS2
actually avoids the race but is prone to deadlocks (it acquires the same
lock both above and below mmap_sem); OCFS2 locking seems kind of hosed and
some read, write, and hole punch paths are not properly serialized,
possibly leading to fs corruption. Both issues are non-trivial so the
respective fs maintainers have to deal with them (I've informed them and
the problems were generally confirmed). Anyway, for all the other
filesystems this kind of race should be closed.

As a next step, I'd like to actually make sure all calls to
truncate_inode_pages() happen under mapping->invalidate_lock, add the assert
and then we can also get rid of i_size checks in some places (truncate can
use the same serialization scheme as hole punch). But that step is mostly
a cleanup so I'd like to get these functional fixes in first.

Changes since v3:
* Renamed and moved lock to struct address_space
* Added conversions of tmpfs, ceph, cifs, fuse, f2fs
* Fixed error handling path in filemap_read()
* Removed .page_mkwrite() cleanup from the series for now

Changes since v2:
* Added documentation and comments regarding lock ordering and how the lock is
  supposed to be used
* Added conversions of ext2, xfs, zonefs
* Added patch removing i_mapping_sem protection from .page_mkwrite handlers

Changes since v1:
* Moved to using inode->i_mapping_sem instead of an aops handler to acquire
  the appropriate lock

---
Motivation:

Amir has reported [1] that ext4 has a potential issue where reads can race
with hole punching, possibly exposing stale data from freed blocks or even
corrupting the filesystem when stale mapping data gets used for writeout.
The problem is that during hole punching, new page cache pages can get
instantiated and block mappings looked up in the punched range after
truncate_inode_pages() has run but before the filesystem removes blocks
from the file. In principle any filesystem implementing hole punching thus
needs a mechanism to block instantiation of page cache pages during hole
punching to avoid this race. This is further complicated by the fact that
there are multiple places that can instantiate pages in the page cache. We
can have a regular read(2) or page fault doing this, but fadvise(2) or
madvise(2) can also result in reading in page cache pages through
force_page_cache_readahead().

There are a couple of ways to fix this. The first (currently implemented
by XFS) is to protect read(2) and *advise(2) calls with i_rwsem so that
they are serialized with hole punching. This is easy to do but as a result
all reads would be serialized with writes, and mixed read-write workloads
would suffer heavily on ext4. Thus this series introduces a new rwsem in
struct address_space - invalidate_lock (called inode->i_mapping_sem in
earlier versions of the series) - and takes it when creating new pages in
the page cache and looking up their corresponding block mapping. We also
replace EXT4_I(inode)->i_mmap_sem with this new rwsem, which provides the
necessary serialization with hole punching for ext4.
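
To illustrate, the intended discipline looks roughly like this - a minimal
sketch against the rwsem added by this series; the exact call sites and
error handling of course vary per filesystem:

/* Hole punch side: take the lock exclusively so no new pages can be
 * instantiated in the punched range while blocks are being freed. */
down_write(&inode->i_mapping->invalidate_lock);
truncate_pagecache_range(inode, start, end);
/* ... update the fs block mapping and free the blocks ... */
up_write(&inode->i_mapping->invalidate_lock);

/* Page-cache-filling side (read / fault / readahead): hold the lock
 * shared while instantiating pages and mapping their blocks. */
down_read(&inode->i_mapping->invalidate_lock);
/* ... add pages to the page cache and look up their block mapping ... */
up_read(&inode->i_mapping->invalidate_lock);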

								Honza

[1] https://lore.kernel.org/linux-fsdevel/CAOQ4uxjQNmxqmtA_VbYW0Su9rKRk2zobJmahcyeaEVOFKVQ5dw@mail.gmail.com/

Previous versions:
Link: https://lore.kernel.org/linux-fsdevel/20210208163918.7871-1-jack@suse.cz/
Link: http://lore.kernel.org/r/20210413105205.3093-1-jack@suse.cz

CC: ceph-devel@vger.kernel.org
CC: Chao Yu <yuchao0@huawei.com>
CC: Damien Le Moal <damien.lemoal@wdc.com>
CC: "Darrick J. Wong" <darrick.wong@oracle.com>
CC: Hugh Dickins <hughd@google.com>
CC: Jaegeuk Kim <jaegeuk@kernel.org>
CC: Jeff Layton <jlayton@kernel.org>
CC: Johannes Thumshirn <jth@kernel.org>
CC: linux-cifs@vger.kernel.org
CC: <linux-ext4@vger.kernel.org>
CC: linux-f2fs-devel@lists.sourceforge.net
CC: <linux-fsdevel@vger.kernel.org>
CC: <linux-mm@kvack.org>
CC: <linux-xfs@vger.kernel.org>
CC: Miklos Szeredi <miklos@szeredi.hu>
CC: Steve French <sfrench@samba.org>
CC: Ted Tso <tytso@mit.edu>

Comments

Matthew Wilcox April 23, 2021, 6:30 p.m. UTC | #1
On Fri, Apr 23, 2021 at 07:29:31PM +0200, Jan Kara wrote:
> Currently, serializing operations such as page fault, read, or readahead
> against hole punching is rather difficult. The basic race scheme is
> like:
> 
> fallocate(FALLOC_FL_PUNCH_HOLE)			read / fault / ..
>   truncate_inode_pages_range()
> 						  <create pages in page
> 						   cache here>
>   <update fs block mapping and free blocks>
> 
> Now the problem is in this way read / page fault / readahead can
> instantiate pages in page cache with potentially stale data (if blocks
> get quickly reused). Avoiding this race is not simple - page locks do
> not work because we want to make sure there are *no* pages in given
> range.

One of the things I've had in mind for a while is moving the DAX locked
entry concept into the page cache proper.  It would avoid creating the
new semaphore, at the cost of taking the i_pages lock twice (once to
insert the entries that cover the range, and once to delete the entries).

It'd have pretty much the same effect, though -- read/fault/... would
block until the entry was deleted from the page cache.
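
Roughly, and purely as illustration (the sentinel value and the wait
protocol here are made up, loosely modelled on the DAX entry locking in
fs/dax.c):

#define PUNCH_LOCKED_ENTRY	xa_mk_value(1)	/* hypothetical sentinel */

/* Hole punch: claim every slot in the range under the i_pages lock so
 * lookups find a "locked" entry instead of an empty slot.  Allocation
 * retries via xas_nomem() are omitted for brevity. */
static void lock_pagecache_range(struct address_space *mapping,
				 pgoff_t first, pgoff_t last)
{
	XA_STATE(xas, &mapping->i_pages, first);

	xas_lock_irq(&xas);
	while (xas.xa_index <= last) {
		xas_store(&xas, PUNCH_LOCKED_ENTRY);
		xas_next(&xas);
	}
	xas_unlock_irq(&xas);
}

/* read / fault / readahead would then wait for the sentinel to be
 * deleted (e.g. via wait_var_event()) before instantiating a page. */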
Dave Chinner April 23, 2021, 11:04 p.m. UTC | #2
On Fri, Apr 23, 2021 at 07:29:31PM +0200, Jan Kara wrote:
> Currently, serializing operations such as page fault, read, or readahead
> against hole punching is rather difficult. The basic race scheme is
> like:
> 
> fallocate(FALLOC_FL_PUNCH_HOLE)			read / fault / ..
>   truncate_inode_pages_range()
> 						  <create pages in page
> 						   cache here>
>   <update fs block mapping and free blocks>
> 
> Now the problem is in this way read / page fault / readahead can
> instantiate pages in page cache with potentially stale data (if blocks
> get quickly reused). Avoiding this race is not simple - page locks do
> not work because we want to make sure there are *no* pages in given
> range. inode->i_rwsem does not work because page fault happens under
> mmap_sem which ranks below inode->i_rwsem. Also using it for reads makes
> the performance for mixed read-write workloads suffer.
> 
> So create a new rw_semaphore in the address_space - invalidate_lock -
> that protects adding of pages to page cache for page faults / reads /
> readahead.
.....
> diff --git a/fs/inode.c b/fs/inode.c
> index a047ab306f9a..43596dd8b61e 100644
> --- a/fs/inode.c
> +++ b/fs/inode.c
> @@ -191,6 +191,9 @@ int inode_init_always(struct super_block *sb, struct inode *inode)
>  	mapping_set_gfp_mask(mapping, GFP_HIGHUSER_MOVABLE);
>  	mapping->private_data = NULL;
>  	mapping->writeback_index = 0;
> +	init_rwsem(&mapping->invalidate_lock);
> +	lockdep_set_class(&mapping->invalidate_lock,
> +			  &sb->s_type->invalidate_lock_key);
>  	inode->i_private = NULL;
>  	inode->i_mapping = mapping;
>  	INIT_HLIST_HEAD(&inode->i_dentry);	/* buggered by rcu freeing */

Oh, lockdep. That might be a problem here.

The XFS_MMAPLOCK has non-trivial lockdep annotations so that it is
tracked as nesting properly against the IOLOCK and the ILOCK. When
you end up using xfs_ilock(XFS_MMAPLOCK..) to lock this, XFS will
add subclass annotations to the lock and they are going to be
different to the locking that the VFS does.

We'll see this from xfs_lock_two_inodes() (e.g. in
xfs_swap_extents()) and xfs_ilock2_io_mmap() during reflink
oper.....
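
For readers who haven't stared at those annotations: the subclass games
are along these lines (a hypothetical sketch, not actual XFS code - XFS
encodes its subclasses into xfs_ilock() flags):

/* Locking the same rwsem class on two inodes; without distinct lockdep
 * subclasses this would be flagged as recursive locking. */
static void lock_two_mappings(struct inode *i1, struct inode *i2)
{
	if (i2 < i1)		/* stable ordering avoids ABBA deadlocks */
		swap(i1, i2);
	down_write_nested(&i1->i_mapping->invalidate_lock, 0);
	down_write_nested(&i2->i_mapping->invalidate_lock, 1);
}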

Oooooh. The page cache copy done when breaking a shared extent needs
to lock out page faults on both the source and destination, but it
still needs to be able to populate the page cache of both the source
and destination file.....

.... and vfs_dedupe_file_range_compare() has to be able to read
pages from both the source and destination file to determine that
the contents are identical and that's done while we hold the
XFS_MMAPLOCK exclusively so the compare is atomic w.r.t. all other
user data modification operations being run....

I now have many doubts that this "serialise page faults by locking
out page cache instantiation" method actually works as a generic
mechanism. It's not just page cache invalidation that relies on
being able to lock out page faults: copy-on-write and deduplication
both require the ability to populate the page cache with source data
while page faults are locked out so the data can be compared/copied
atomically with the extent level manipulations and so user data
modifications cannot occur until the physical extent manipulation
operation has completed.

Having only just realised this is a problem, no solution has
immediately popped into my mind. I'll chew on it over the weekend,
but I'm not hopeful at this point...

Cheers,

Dave.
Matthew Wilcox April 23, 2021, 11:51 p.m. UTC | #3
On Sat, Apr 24, 2021 at 08:07:51AM +1000, Dave Chinner wrote:
> I've got to spend time now reconstructing the patchset into a single
> series because the delivery has been spread across three different
> mailing lists and so hit 3 different procmail filters.  I'll comment
> on the patches once I've reconstructed the series and read through
> it as a whole...

$ b4 mbox 20210423171010.12-1-jack@suse.cz
Looking up https://lore.kernel.org/r/20210423171010.12-1-jack%40suse.cz
Grabbing thread from lore.kernel.org/ceph-devel
6 messages in the thread
Saved ./20210423171010.12-1-jack@suse.cz.mbx
Christoph Hellwig April 24, 2021, 6:11 a.m. UTC | #4
On Sat, Apr 24, 2021 at 12:51:49AM +0100, Matthew Wilcox wrote:
> On Sat, Apr 24, 2021 at 08:07:51AM +1000, Dave Chinner wrote:
> > I've got to spend time now reconstructing the patchset into a single
> > series because the delivery has been spread across three different
> > mailing lists and so hit 3 different procmail filters.  I'll comment
> > on the patches once I've reconstructed the series and read through
> > it as a whole...
>
> $ b4 mbox 20210423171010.12-1-jack@suse.cz
> Looking up https://lore.kernel.org/r/20210423171010.12-1-jack%40suse.cz
> Grabbing thread from lore.kernel.org/ceph-devel
> 6 messages in the thread
> Saved ./20210423171010.12-1-jack@suse.cz.mbx

Yikes.  Just send them damn mails.  Or switch the lists to NNTP, but
don't let the people who are reviewing your patches do stupid work
with weird tools.
Jan Kara April 26, 2021, 3:46 p.m. UTC | #5
On Sat 24-04-21 09:04:49, Dave Chinner wrote:
> On Fri, Apr 23, 2021 at 07:29:31PM +0200, Jan Kara wrote:
> > Currently, serializing operations such as page fault, read, or readahead
> > against hole punching is rather difficult. The basic race scheme is
> > like:
> >
> > fallocate(FALLOC_FL_PUNCH_HOLE)			read / fault / ..
> >   truncate_inode_pages_range()
> > 						  <create pages in page
> > 						   cache here>
> >   <update fs block mapping and free blocks>
> >
> > Now the problem is in this way read / page fault / readahead can
> > instantiate pages in page cache with potentially stale data (if blocks
> > get quickly reused). Avoiding this race is not simple - page locks do
> > not work because we want to make sure there are *no* pages in given
> > range. inode->i_rwsem does not work because page fault happens under
> > mmap_sem which ranks below inode->i_rwsem. Also using it for reads makes
> > the performance for mixed read-write workloads suffer.
> >
> > So create a new rw_semaphore in the address_space - invalidate_lock -
> > that protects adding of pages to page cache for page faults / reads /
> > readahead.
> .....
> > diff --git a/fs/inode.c b/fs/inode.c
> > index a047ab306f9a..43596dd8b61e 100644
> > --- a/fs/inode.c
> > +++ b/fs/inode.c
> > @@ -191,6 +191,9 @@ int inode_init_always(struct super_block *sb, struct inode *inode)
> >  	mapping_set_gfp_mask(mapping, GFP_HIGHUSER_MOVABLE);
> >  	mapping->private_data = NULL;
> >  	mapping->writeback_index = 0;
> > +	init_rwsem(&mapping->invalidate_lock);
> > +	lockdep_set_class(&mapping->invalidate_lock,
> > +			  &sb->s_type->invalidate_lock_key);
> >  	inode->i_private = NULL;
> >  	inode->i_mapping = mapping;
> >  	INIT_HLIST_HEAD(&inode->i_dentry);	/* buggered by rcu freeing */
>
> Oh, lockdep. That might be a problem here.
>
> The XFS_MMAPLOCK has non-trivial lockdep annotations so that it is
> tracked as nesting properly against the IOLOCK and the ILOCK. When
> you end up using xfs_ilock(XFS_MMAPLOCK..) to lock this, XFS will
> add subclass annotations to the lock and they are going to be
> different to the locking that the VFS does.
>
> We'll see this from xfs_lock_two_inodes() (e.g. in
> xfs_swap_extents()) and xfs_ilock2_io_mmap() during reflink
> oper.....

Thanks for the pointer. I was kind of wondering what lockdep nesting games
XFS plays but then forgot to look into the details. Anyway, I've preserved
the nesting annotations in XFS and fstests runs on XFS passed without
lockdep complaining, so there isn't at least an obvious breakage. Also, as
far as I can see from the code, XFS's usage and lock nesting of MMAPLOCK
should be compatible with the nesting the VFS enforces (also see below)...
 
> Oooooh. The page cache copy done when breaking a shared extent needs
> to lock out page faults on both the source and destination, but it
> still needs to be able to populate the page cache of both the source
> and destination file.....
>
> .... and vfs_dedupe_file_range_compare() has to be able to read
> pages from both the source and destination file to determine that
> the contents are identical and that's done while we hold the
> XFS_MMAPLOCK exclusively so the compare is atomic w.r.t. all other
> user data modification operations being run....

So I started wondering why fstests passed when reading this :) The reason
is that vfs_dedupe_get_page() does not use the standard page cache filling
paths (neither the readahead API nor filemap_read()); instead it uses
read_mapping_page() and so enters the page cache filling path below the
level at which we take invalidate_lock, and thus everything works as it
should. So read_mapping_page() is similar to places like e.g.
block_truncate_page() or block_write_begin() which may end up filling in
page cache contents but rely on the upper layers to already hold the
appropriate locks. I'll add a comment to read_mapping_page() about this.
Once all filesystems are converted to use invalidate_lock, I also want to
add WARN_ON_ONCE() to various places verifying that invalidate_lock is
held as it should be...
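
Such an assertion could look something along these lines (helper name and
placement hypothetical):

static inline void assert_invalidate_lock_held(struct address_space *mapping)
{
	/* called from page-cache-filling paths once every filesystem
	 * takes invalidate_lock around them */
	WARN_ON_ONCE(!rwsem_is_locked(&mapping->invalidate_lock));
}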
 
> I now have many doubts that this "serialise page faults by locking
> out page cache instantiation" method actually works as a generic
> mechanism. It's not just page cache invalidation that relies on
> being able to lock out page faults: copy-on-write and deduplication
> both require the ability to populate the page cache with source data
> while page faults are locked out so the data can be compared/copied
> atomically with the extent level manipulations and so user data
> modifications cannot occur until the physical extent manipulation
> operation has completed.

Hum, that is a good point. So there are actually two different things you
want to block at different places:

1) You really want to block page cache instantiation for operations such
as hole punch as that operation mutates data and thus the contents would
become stale.

2) You want to block page cache *modification* for operations such as
dedupe while keeping the page cache in place. This is a somewhat different
requirement but invalidate_lock can, in principle, cover it as well.
Basically we just need to keep the invalidate_lock usage in the
.page_mkwrite helpers (see the sketch below). The question remains whether
invalidate_lock is still a good name with this usage in mind, and I
probably need to update the documentation to reflect this usage.
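
For case 2) that could look roughly like this - a hedged sketch of a
.page_mkwrite helper, with hypothetical naming and error paths trimmed:

static vm_fault_t sketch_page_mkwrite(struct vm_fault *vmf)
{
	struct inode *inode = file_inode(vmf->vma->vm_file);
	vm_fault_t ret;

	sb_start_pagefault(inode->i_sb);
	/* Held shared here: dedupe takes invalidate_lock exclusively,
	 * so page cache contents cannot be dirtied under the compare. */
	down_read(&inode->i_mapping->invalidate_lock);
	ret = filemap_page_mkwrite(vmf);
	up_read(&inode->i_mapping->invalidate_lock);
	sb_end_pagefault(inode->i_sb);
	return ret;
}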

								Honza
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR