
ceph: flush cap release on session flush

Message ID: 20230207050452.403436-1-xiubli@redhat.com
State: New
Series: ceph: flush cap release on session flush

Commit Message

Xiubo Li Feb. 7, 2023, 5:04 a.m. UTC
From: Xiubo Li <xiubli@redhat.com>

MDS expects the completed cap release prior to responding to the
session flush for cache drop.

Cc: <stable@kernel.org>
URL: http://tracker.ceph.com/issues/38009
Cc: Patrick Donnelly <pdonnell@redhat.com>
Signed-off-by: Xiubo Li <xiubli@redhat.com>
---
 fs/ceph/mds_client.c | 6 ++++++
 1 file changed, 6 insertions(+)

Comments

Venky Shankar Feb. 7, 2023, 5:16 a.m. UTC | #1
On Tue, Feb 7, 2023 at 10:35 AM <xiubli@redhat.com> wrote:
>
> From: Xiubo Li <xiubli@redhat.com>
>
> MDS expects the completed cap release prior to responding to the
> session flush for cache drop.
>
> Cc: <stable@kernel.org>
> URL: http://tracker.ceph.com/issues/38009
> Cc: Patrick Donnelly <pdonnell@redhat.com>
> Signed-off-by: Xiubo Li <xiubli@redhat.com>
> ---
>  fs/ceph/mds_client.c | 6 ++++++
>  1 file changed, 6 insertions(+)
>
> diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
> index 3c9d3f609e7f..51366bd053de 100644
> --- a/fs/ceph/mds_client.c
> +++ b/fs/ceph/mds_client.c
> @@ -4039,6 +4039,12 @@ static void handle_session(struct ceph_mds_session *session,
>                 break;
>
>         case CEPH_SESSION_FLUSHMSG:
> +               /* flush cap release */
> +               spin_lock(&session->s_cap_lock);
> +               if (session->s_num_cap_releases)
> +                       ceph_flush_cap_releases(mdsc, session);
> +               spin_unlock(&session->s_cap_lock);
> +
>                 send_flushmsg_ack(mdsc, session, seq);
>                 break;

Ugh. kclient never flushed cap releases o_O

LGTM.

Reviewed-by: Venky Shankar <vshankar@redhat.com>

>
> --
> 2.31.1
>
Xiubo Li Feb. 7, 2023, 5:19 a.m. UTC | #2
On 07/02/2023 13:16, Venky Shankar wrote:
> On Tue, Feb 7, 2023 at 10:35 AM <xiubli@redhat.com> wrote:
>> From: Xiubo Li <xiubli@redhat.com>
>>
>> MDS expects the completed cap release prior to responding to the
>> session flush for cache drop.
>>
>> Cc: <stable@kernel.org>
>> URL: http://tracker.ceph.com/issues/38009
>> Cc: Patrick Donnelly <pdonnell@redhat.com>
>> Signed-off-by: Xiubo Li <xiubli@redhat.com>
>> ---
>>   fs/ceph/mds_client.c | 6 ++++++
>>   1 file changed, 6 insertions(+)
>>
>> diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
>> index 3c9d3f609e7f..51366bd053de 100644
>> --- a/fs/ceph/mds_client.c
>> +++ b/fs/ceph/mds_client.c
>> @@ -4039,6 +4039,12 @@ static void handle_session(struct ceph_mds_session *session,
>>                  break;
>>
>>          case CEPH_SESSION_FLUSHMSG:
>> +               /* flush cap release */
>> +               spin_lock(&session->s_cap_lock);
>> +               if (session->s_num_cap_releases)
>> +                       ceph_flush_cap_releases(mdsc, session);
>> +               spin_unlock(&session->s_cap_lock);
>> +
>>                  send_flushmsg_ack(mdsc, session, seq);
>>                  break;
> Ugh. kclient never flushed cap releases o_O

Yeah, I think this was missed before.

> LGTM.
>
> Reviewed-by: Venky Shankar <vshankar@redhat.com>

Thanks Venky.

>> --
>> 2.31.1
>>
>
Jeff Layton Feb. 7, 2023, 12:48 p.m. UTC | #3
On Tue, 2023-02-07 at 13:04 +0800, xiubli@redhat.com wrote:
> From: Xiubo Li <xiubli@redhat.com>
> 
> MDS expects the completed cap release prior to responding to the
> session flush for cache drop.
> 
> Cc: <stable@kernel.org>
> URL: http://tracker.ceph.com/issues/38009
> Cc: Patrick Donnelly <pdonnell@redhat.com>
> Signed-off-by: Xiubo Li <xiubli@redhat.com>
> ---
>  fs/ceph/mds_client.c | 6 ++++++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
> index 3c9d3f609e7f..51366bd053de 100644
> --- a/fs/ceph/mds_client.c
> +++ b/fs/ceph/mds_client.c
> @@ -4039,6 +4039,12 @@ static void handle_session(struct ceph_mds_session *session,
>  		break;
>  
>  	case CEPH_SESSION_FLUSHMSG:
> +		/* flush cap release */
> +		spin_lock(&session->s_cap_lock);
> +		if (session->s_num_cap_releases)
> +			ceph_flush_cap_releases(mdsc, session);
> +		spin_unlock(&session->s_cap_lock);
> +
>  		send_flushmsg_ack(mdsc, session, seq);
>  		break;
>  

Ouch! Good catch!

Reviewed-by: Jeff Layton <jlayton@kernel.org>
Ilya Dryomov Feb. 7, 2023, 4:03 p.m. UTC | #4
On Tue, Feb 7, 2023 at 6:19 AM Xiubo Li <xiubli@redhat.com> wrote:
>
>
> On 07/02/2023 13:16, Venky Shankar wrote:
> > On Tue, Feb 7, 2023 at 10:35 AM <xiubli@redhat.com> wrote:
> >> From: Xiubo Li <xiubli@redhat.com>
> >>
> >> MDS expects the completed cap release prior to responding to the
> >> session flush for cache drop.
> >>
> >> Cc: <stable@kernel.org>
> >> URL: http://tracker.ceph.com/issues/38009
> >> Cc: Patrick Donnelly <pdonnell@redhat.com>
> >> Signed-off-by: Xiubo Li <xiubli@redhat.com>
> >> ---
> >>   fs/ceph/mds_client.c | 6 ++++++
> >>   1 file changed, 6 insertions(+)
> >>
> >> diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
> >> index 3c9d3f609e7f..51366bd053de 100644
> >> --- a/fs/ceph/mds_client.c
> >> +++ b/fs/ceph/mds_client.c
> >> @@ -4039,6 +4039,12 @@ static void handle_session(struct ceph_mds_session *session,
> >>                  break;
> >>
> >>          case CEPH_SESSION_FLUSHMSG:
> >> +               /* flush cap release */
> >> +               spin_lock(&session->s_cap_lock);
> >> +               if (session->s_num_cap_releases)
> >> +                       ceph_flush_cap_releases(mdsc, session);
> >> +               spin_unlock(&session->s_cap_lock);
> >> +
> >>                  send_flushmsg_ack(mdsc, session, seq);
> >>                  break;
> > Ugh. kclient never flushed cap releases o_O
>
> Yeah, I think this was missed before.

Now queued up for 6.2-rc8.

Thanks,

                Ilya

Patch

diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
index 3c9d3f609e7f..51366bd053de 100644
--- a/fs/ceph/mds_client.c
+++ b/fs/ceph/mds_client.c
@@ -4039,6 +4039,12 @@ static void handle_session(struct ceph_mds_session *session,
 		break;
 
 	case CEPH_SESSION_FLUSHMSG:
+		/* flush cap release */
+		spin_lock(&session->s_cap_lock);
+		if (session->s_num_cap_releases)
+			ceph_flush_cap_releases(mdsc, session);
+		spin_unlock(&session->s_cap_lock);
+
 		send_flushmsg_ack(mdsc, session, seq);
 		break;
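
For context on what the new call does: ceph_flush_cap_releases() does not build and send the cap release message inline; it only kicks the session's asynchronous cap release worker. The sketch below is paraphrased from memory of fs/ceph/caps.c around this kernel version, so the exact field and helper names (mdsc->stopping, cap_wq, s_cap_release_work) may differ from the tree:

void ceph_flush_cap_releases(struct ceph_mds_client *mdsc,
			     struct ceph_mds_session *session)
{
	/* nothing to do if the client is being torn down */
	if (mdsc->stopping)
		return;

	/* the queued work owns a session reference until it runs */
	ceph_get_mds_session(session);
	if (queue_work(mdsc->fsc->cap_wq,
		       &session->s_cap_release_work)) {
		dout("cap release work queued\n");
	} else {
		/* work was already pending; drop the extra reference */
		ceph_put_mds_session(session);
		dout("failed to queue cap release work\n");
	}
}

The point of the fix is simply that the CEPH_SESSION_FLUSHMSG case now queues this work for any session that still has pending releases (s_num_cap_releases != 0) before send_flushmsg_ack() is sent back to the MDS.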