Message ID | 20220207050340.872893-1-xiubli@redhat.com
---|---
State | New
Series | ceph: fail the request directly if handle_reply gets an ESTALE
On Mon, Feb 7, 2022 at 7:12 AM Jeff Layton <jlayton@kernel.org> wrote: > > On Mon, 2022-02-07 at 13:03 +0800, xiubli@redhat.com wrote: > > From: Xiubo Li <xiubli@redhat.com> > > > > If MDS return ESTALE, that means the MDS has already iterated all > > the possible active MDSes including the auth MDS or the inode is > > under purging. No need to retry in auth MDS and will just return > > ESTALE directly. > > > > When you say "purging" here, do you mean that it's effectively being > cleaned up after being unlinked? Or is it just being purged from the > MDS's cache? > > > Or it will cause definite loop for retrying it. > > > > URL: https://tracker.ceph.com/issues/53504 > > Signed-off-by: Xiubo Li <xiubli@redhat.com> > > --- > > fs/ceph/mds_client.c | 29 ----------------------------- > > 1 file changed, 29 deletions(-) > > > > diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c > > index 93e5e3c4ba64..c918d2ac8272 100644 > > --- a/fs/ceph/mds_client.c > > +++ b/fs/ceph/mds_client.c > > @@ -3368,35 +3368,6 @@ static void handle_reply(struct ceph_mds_session *session, struct ceph_msg *msg) > > > > result = le32_to_cpu(head->result); > > > > - /* > > - * Handle an ESTALE > > - * if we're not talking to the authority, send to them > > - * if the authority has changed while we weren't looking, > > - * send to new authority > > - * Otherwise we just have to return an ESTALE > > - */ > > - if (result == -ESTALE) { > > - dout("got ESTALE on request %llu\n", req->r_tid); > > - req->r_resend_mds = -1; > > - if (req->r_direct_mode != USE_AUTH_MDS) { > > - dout("not using auth, setting for that now\n"); > > - req->r_direct_mode = USE_AUTH_MDS; > > - __do_request(mdsc, req); > > - mutex_unlock(&mdsc->mutex); > > - goto out; > > - } else { > > - int mds = __choose_mds(mdsc, req, NULL); > > - if (mds >= 0 && mds != req->r_session->s_mds) { > > - dout("but auth changed, so resending\n"); > > - __do_request(mdsc, req); > > - mutex_unlock(&mdsc->mutex); > > - goto out; > > - } > > - } > > - dout("have to return ESTALE on request %llu\n", req->r_tid); > > - } > > - > > - > > if (head->safe) { > > set_bit(CEPH_MDS_R_GOT_SAFE, &req->r_req_flags); > > __unregister_request(mdsc, req); > > > (cc'ing Greg, Sage and Zheng) > > This patch sort of contradicts the original design, AFAICT, and I'm not > sure what the correct behavior should be. I could use some > clarification. > > The original code (from the 2009 merge) would tolerate 2 ESTALEs before > giving up and returning that to userland. Then in 2010, Greg added this > commit: > > https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=e55b71f802fd448a79275ba7b263fe1a8639be5f > > ...which would presumably make it retry indefinitely as long as the auth > MDS kept changing. Then, Zheng made this change in 2013: > > https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=ca18bede048e95a749d13410ce1da4ad0ffa7938 > > ...which seems to try to do the same thing, but detected the auth mds > change in a different way. > > Is that where livelock detection was broken? Or was there some > corresponding change to __choose_mds that should prevent infinitely > looping on the same request? > > In NFS, ESTALE errors mean that the filehandle (inode) no longer exists > and that the server has forgotten about it. Does it mean the same thing > to the ceph MDS? This used to get returned if the MDS couldn't find the inode number in question, because it wasn't in cache.
This was not possible in most cases because if the client has caps on the inode, it's pinned in MDS cache, but was possible when NFS was layered on top (and possibly some other edge case APIs where clients can operate on inode numbers they've saved from a previous lookup?). > > Has the behavior of the MDS changed such that these retries are no > longer necessary on an ESTALE? If so, when did this change, and does the > client need to do anything to detect what behavior it should be using? Well, I see that CEPHFS_ESTALE is still returned sometimes from the MDS, so somebody will need to audit those, but the MDS has definitely changed. These days, we can look up an unknown inode using the (directory) backtrace we store on its first RADOS object, and it does (at least some of the time? But I think everywhere relevant). But that happened when we first added scrub circa 2014ish? Previously if the inode wasn't in cache, we just had no way of finding it. -Greg
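As a concrete illustration of the backtrace mechanism Greg describes: for a regular file the backtrace lives on the first RADOS object of the file's data, and it can be peeked at from userspace. The sketch below uses the librados C API; the pool name and the <inode-hex>.00000000 object name passed on the command line are assumptions for the example, and the "parent" xattr it reads is, as far as I know, where that backtrace is stored (as a binary-encoded structure, not a readable path).

/*
 * Sketch: fetch the raw backtrace blob that lets the MDS locate an inode
 * that is no longer in its cache. Assumes the file's first data object is
 * named "<inode-hex>.00000000" in the given data pool and that the
 * backtrace is kept in its "parent" xattr.
 *
 * Build with: cc backtrace_peek.c -lrados
 */
#include <rados/librados.h>
#include <stdio.h>

int main(int argc, char **argv)
{
	if (argc != 3) {
		/* e.g. ./backtrace_peek cephfs_data 10000000001.00000000 */
		fprintf(stderr, "usage: %s <data-pool> <first-object>\n", argv[0]);
		return 1;
	}

	rados_t cluster;
	rados_ioctx_t io;
	char buf[4096];

	if (rados_create(&cluster, NULL) < 0 ||
	    rados_conf_read_file(cluster, NULL) < 0 ||	/* default ceph.conf search */
	    rados_connect(cluster) < 0) {
		fprintf(stderr, "cannot connect to cluster\n");
		return 1;
	}
	if (rados_ioctx_create(cluster, argv[1], &io) < 0) {
		fprintf(stderr, "no such pool: %s\n", argv[1]);
		rados_shutdown(cluster);
		return 1;
	}

	/* The "parent" xattr holds the encoded backtrace (ancestor dentries). */
	int len = rados_getxattr(io, argv[2], "parent", buf, sizeof(buf));
	if (len < 0)
		fprintf(stderr, "no backtrace on %s (err %d)\n", argv[2], len);
	else
		printf("got %d bytes of encoded backtrace for %s\n", len, argv[2]);

	rados_ioctx_destroy(io);
	rados_shutdown(cluster);
	return len < 0 ? 1 : 0;
}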
On Mon, Feb 7, 2022 at 8:13 AM Jeff Layton <jlayton@kernel.org> wrote: > The tracker bug mentions that this occurs after an MDS is restarted. > Could this be the result of clients relying on delete-on-last-close > behavior? Oooh, I didn't actually look at the tracker. > > IOW, we have a situation where a file is opened and then unlinked, and > userland is actively doing I/O to it. The thing gets moved into the > strays dir, but isn't unlinked yet because we have open files against > it. Everything works fine at this point... > > Then, the MDS restarts and the inode gets purged altogether. Client > reconnects and tries to reclaim his open, and gets ESTALE. Uh, okay. So I didn't do a proper audit before I sent my previous reply, but one of the cases I did see was that the MDS returns ESTALE if you try to do a name lookup on an inode in the stray directory. I don't know if that's what is happening here or not? But perhaps that's the root of the problem in this case. Oh, nope, I see it's issuing getattr requests. That doesn't do ESTALE directly so it must indeed be coming out of MDCache::path_traverse. The MDS shouldn't move an inode into the purge queue on restart unless there were no clients with caps on it (that state is persisted to disk so it knows). Maybe if the clients don't make the reconnect window it's dropping them all and *then* moves it into purge queue? I think we need to identify what's happening there before we issue kernel client changes, Xiubo? -Greg
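To make the scenario under discussion concrete, here is a minimal userspace sketch of the open-then-unlink situation (my own illustration, not the reproducer from the tracker): the mount path is a placeholder, and whether fstat() actually sends a getattr to the MDS depends on which caps the client currently holds.

/*
 * Sketch of the delete-on-last-close scenario: keep issuing metadata ops on
 * a file that has been unlinked but is still open, while the MDS is
 * restarted out-of-band. If the stray inode gets purged, the retried
 * getattr is the request that would come back ESTALE.
 */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
	const char *path = "/mnt/cephfs/estale-test";	/* assumed CephFS mount */
	int fd = open(path, O_CREAT | O_RDWR, 0644);

	if (fd < 0 || write(fd, "x", 1) != 1) {
		perror("open/write");
		return 1;
	}
	unlink(path);	/* inode moves to the MDS stray dir; fd keeps it alive */

	for (;;) {
		struct stat st;

		/* ...restart the MDS while this loop runs... */
		if (fstat(fd, &st) < 0) {
			fprintf(stderr, "fstat: %s\n", strerror(errno));
			if (errno == ESTALE)	/* the error path this thread is about */
				break;
		}
		sleep(1);
	}
	close(fd);
	return 0;
}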
On 2/7/22 11:12 PM, Jeff Layton wrote: > On Mon, 2022-02-07 at 13:03 +0800, xiubli@redhat.com wrote: >> From: Xiubo Li <xiubli@redhat.com> >> >> If MDS return ESTALE, that means the MDS has already iterated all >> the possible active MDSes including the auth MDS or the inode is >> under purging. No need to retry in auth MDS and will just return >> ESTALE directly. >> > When you say "purging" here, do you mean that it's effectively being > cleaned up after being unlinked? Or is it just being purged from the > MDS's cache?

There is one case: when the client just removes the file, or the file is forcibly overwritten via a rename, the related dentry in the MDS will be put into the stray dir and the inode will be marked as under purging. So if the client then calls getattr, for example, or retries the pending getattr request, the MDS can in theory return ESTALE. Locally I still haven't reproduced it yet.

>> Or it will cause definite loop for retrying it. >> >> URL: https://tracker.ceph.com/issues/53504 >> Signed-off-by: Xiubo Li <xiubli@redhat.com> >> --- >> fs/ceph/mds_client.c | 29 ----------------------------- >> 1 file changed, 29 deletions(-) >> >> diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c >> index 93e5e3c4ba64..c918d2ac8272 100644 >> --- a/fs/ceph/mds_client.c >> +++ b/fs/ceph/mds_client.c >> @@ -3368,35 +3368,6 @@ static void handle_reply(struct ceph_mds_session *session, struct ceph_msg *msg) >> >> result = le32_to_cpu(head->result); >> >> - /* >> - * Handle an ESTALE >> - * if we're not talking to the authority, send to them >> - * if the authority has changed while we weren't looking, >> - * send to new authority >> - * Otherwise we just have to return an ESTALE >> - */ >> - if (result == -ESTALE) { >> - dout("got ESTALE on request %llu\n", req->r_tid); >> - req->r_resend_mds = -1; >> - if (req->r_direct_mode != USE_AUTH_MDS) { >> - dout("not using auth, setting for that now\n"); >> - req->r_direct_mode = USE_AUTH_MDS; >> - __do_request(mdsc, req); >> - mutex_unlock(&mdsc->mutex); >> - goto out; >> - } else { >> - int mds = __choose_mds(mdsc, req, NULL); >> - if (mds >= 0 && mds != req->r_session->s_mds) { >> - dout("but auth changed, so resending\n"); >> - __do_request(mdsc, req); >> - mutex_unlock(&mdsc->mutex); >> - goto out; >> - } >> - } >> - dout("have to return ESTALE on request %llu\n", req->r_tid); >> - } >> - >> - >> if (head->safe) { >> set_bit(CEPH_MDS_R_GOT_SAFE, &req->r_req_flags); >> __unregister_request(mdsc, req); > > (cc'ing Greg, Sage and Zheng) > > This patch sort of contradicts the original design, AFAICT, and I'm not > sure what the correct behavior should be. I could use some > clarification. > > The original code (from the 2009 merge) would tolerate 2 ESTALEs before > giving up and returning that to userland. Then in 2010, Greg added this > commit: > > https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=e55b71f802fd448a79275ba7b263fe1a8639be5f > > ...which would presumably make it retry indefinitely as long as the auth > MDS kept changing. Then, Zheng made this change in 2013: > > https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=ca18bede048e95a749d13410ce1da4ad0ffa7938 > > ...which seems to try to do the same thing, but detected the auth mds > change in a different way. > > Is that where livelock detection was broken? Or was there some > corresponding change to __choose_mds that should prevent infinitely > looping on the same request?
> > In NFS, ESTALE errors mean that the filehandle (inode) no longer exists > and that the server has forgotten about it. Does it mean the same thing > to the ceph MDS? > > Has the behavior of the MDS changed such that these retries are no > longer necessary on an ESTALE? If so, when did this change, and does the > client need to do anything to detect what behavior it should be using?
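The rename-over variant Xiubo mentions above can be sketched the same way (again just an illustration, with a placeholder mount path): the overwritten file's inode moves to the stray dir even though nothing was explicitly unlinked, while the old file descriptor keeps it alive.

/* Sketch: renaming "replacement" over "victim" moves victim's inode to the
 * stray dir while oldfd still references it; a later getattr on oldfd is the
 * kind of request that could see ESTALE if that stray inode were purged. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
	const char *dir = "/mnt/cephfs";	/* assumed CephFS mount */
	char oldp[64], newp[64];
	struct stat st;

	snprintf(oldp, sizeof(oldp), "%s/victim", dir);
	snprintf(newp, sizeof(newp), "%s/replacement", dir);

	int oldfd = open(oldp, O_CREAT | O_RDWR, 0644);
	int newfd = open(newp, O_CREAT | O_RDWR, 0644);
	if (oldfd < 0 || newfd < 0)
		return 1;

	rename(newp, oldp);		/* "victim" inode now only reachable via oldfd */

	if (fstat(oldfd, &st) < 0)	/* could become ESTALE if the stray inode is purged */
		perror("fstat on overwritten file");

	close(oldfd);
	close(newfd);
	return 0;
}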
On 2/8/22 1:11 AM, Jeff Layton wrote: > On Mon, 2022-02-07 at 08:28 -0800, Gregory Farnum wrote: >> On Mon, Feb 7, 2022 at 8:13 AM Jeff Layton <jlayton@kernel.org> wrote: >>> The tracker bug mentions that this occurs after an MDS is restarted. >>> Could this be the result of clients relying on delete-on-last-close >>> behavior? >> Oooh, I didn't actually look at the tracker. >> >>> IOW, we have a situation where a file is opened and then unlinked, and >>> userland is actively doing I/O to it. The thing gets moved into the >>> strays dir, but isn't unlinked yet because we have open files against >>> it. Everything works fine at this point... >>> >>> Then, the MDS restarts and the inode gets purged altogether. Client >>> reconnects and tries to reclaim his open, and gets ESTALE. >> Uh, okay. So I didn't do a proper audit before I sent my previous >> reply, but one of the cases I did see was that the MDS returns ESTALE >> if you try to do a name lookup on an inode in the stray directory. I >> don't know if that's what is happening here or not? But perhaps that's >> the root of the problem in this case. >> >> Oh, nope, I see it's issuing getattr requests. That doesn't do ESTALE >> directly so it must indeed be coming out of MDCache::path_traverse. >> >> The MDS shouldn't move an inode into the purge queue on restart unless >> there were no clients with caps on it (that state is persisted to disk >> so it knows). Maybe if the clients don't make the reconnect window >> it's dropping them all and *then* moves it into purge queue? I think >> we need to identify what's happening there before we issue kernel >> client changes, Xiubo? > > Agreed. I think we need to understand why he's seeing ESTALE errors in > the first place, but it sounds like retrying on an ESTALE error isn't > likely to be helpful.

There is one case that could cause the inode to be put into the purge queue:

1. A file is unlinked, and just after the unlink journal log is flushed the MDS restarts or is replaced by a standby MDS. The unlink journal log contains a straydn, and the straydn links to the related CInode.

2. The newly starting MDS replays this unlink journal log in the up:standby_replay state.

3. The MDCache::upkeep_main() thread tries to trim the MDCache, and it may trim the straydn. Since the clients haven't reconnected their sessions yet, the CInode won't have any client caps, so when the straydn and CInode are trimmed, the CInode is put into the purge queue.

4. After up:reconnect, when the pending getattr requests are retried, the MDS returns ESTALE.

This should have been fixed recently by https://github.com/ceph/ceph/pull/41667, which only enables trim() in the up:active state.

I also went through the ESTALE related code in the MDS; this patch still makes sense, and retrying the request after getting an ESTALE errno makes no sense.

BRs Xiubo
diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
index 93e5e3c4ba64..c918d2ac8272 100644
--- a/fs/ceph/mds_client.c
+++ b/fs/ceph/mds_client.c
@@ -3368,35 +3368,6 @@ static void handle_reply(struct ceph_mds_session *session, struct ceph_msg *msg)
 
 	result = le32_to_cpu(head->result);
 
-	/*
-	 * Handle an ESTALE
-	 * if we're not talking to the authority, send to them
-	 * if the authority has changed while we weren't looking,
-	 * send to new authority
-	 * Otherwise we just have to return an ESTALE
-	 */
-	if (result == -ESTALE) {
-		dout("got ESTALE on request %llu\n", req->r_tid);
-		req->r_resend_mds = -1;
-		if (req->r_direct_mode != USE_AUTH_MDS) {
-			dout("not using auth, setting for that now\n");
-			req->r_direct_mode = USE_AUTH_MDS;
-			__do_request(mdsc, req);
-			mutex_unlock(&mdsc->mutex);
-			goto out;
-		} else {
-			int mds = __choose_mds(mdsc, req, NULL);
-			if (mds >= 0 && mds != req->r_session->s_mds) {
-				dout("but auth changed, so resending\n");
-				__do_request(mdsc, req);
-				mutex_unlock(&mdsc->mutex);
-				goto out;
-			}
-		}
-		dout("have to return ESTALE on request %llu\n", req->r_tid);
-	}
-
-
 	if (head->safe) {
 		set_bit(CEPH_MDS_R_GOT_SAFE, &req->r_req_flags);
 		__unregister_request(mdsc, req);
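With the retry block removed, the ESTALE result is handled like any other MDS error and surfaces directly to the caller, so applications see it much as they would on NFS. A rough sketch of the userspace-side handling follows (placeholder path; whether re-resolving by path helps depends on whether the inode really is gone).

/* Sketch: once ESTALE reaches userspace, the usual NFS-style response is to
 * drop the stale handle and re-resolve by path; if the file truly no longer
 * exists, the re-open fails with ENOENT and the caller gives up. */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

static int reopen_if_stale(int fd, const char *path)
{
	struct stat st;

	if (fstat(fd, &st) == 0)
		return fd;			/* handle is still good */
	if (errno != ESTALE)
		return -1;			/* some other error: report it */

	close(fd);				/* stale: forget the old handle */
	return open(path, O_RDWR);		/* -1/ENOENT if the inode is gone */
}

int main(void)
{
	const char *path = "/mnt/cephfs/some-file";	/* assumed path */
	int fd = open(path, O_RDWR);

	if (fd < 0)
		return 1;
	fd = reopen_if_stale(fd, path);
	if (fd < 0)
		perror("file went away");
	else
		close(fd);
	return 0;
}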