ceph: correctly release memory from capsnap

Message ID 20210817075816.190025-1-xiubli@redhat.com
State New
Series: ceph: correctly release memory from capsnap

Commit Message

Xiubo Li Aug. 17, 2021, 7:58 a.m. UTC
From: Xiubo Li <xiubli@redhat.com>

When force umounting, the client will try to remove all the session
caps. If any capsnap is still on the flushing list, the remove session
caps callback will try to release the capsnap->cap_flush memory back to
the "ceph_cap_flush_cachep" slab cache, but that memory is embedded in
the capsnap and was actually allocated from the kmalloc-256 slab cache.

URL: https://tracker.ceph.com/issues/52283
Signed-off-by: Xiubo Li <xiubli@redhat.com>
---
 fs/ceph/mds_client.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)
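
For readers unfamiliar with this bug class, the sketch below shows why
returning an embedded member to a slab cache it did not come from
corrupts memory. The struct and cache names here are illustrative
stand-ins, not the real ceph definitions:

    /* A standalone flush object comes from its own slab cache: */
    struct flush *cf = kmem_cache_zalloc(flush_cachep, GFP_KERNEL);
    kmem_cache_free(flush_cachep, cf);      /* fine: same cache */

    /* A capsnap, however, embeds its flush object inline ... */
    struct snap {
            struct list_head ci_item;
            struct flush cap_flush;         /* embedded member */
    };
    struct snap *s = kzalloc(sizeof(*s), GFP_KERNEL);  /* kmalloc-256 */

    /*
     * ... so freeing the embedded member back into flush_cachep is
     * the bug: the address points into the middle of a kmalloc-256
     * allocation, and the bogus free corrupts both caches.
     */
    kmem_cache_free(flush_cachep, &s->cap_flush);      /* BAD */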

Comments

Ilya Dryomov Aug. 17, 2021, 10:46 a.m. UTC | #1
On Tue, Aug 17, 2021 at 9:58 AM <xiubli@redhat.com> wrote:
>
> From: Xiubo Li <xiubli@redhat.com>
>
> When force umounting, the client will try to remove all the session
> caps. If any capsnap is still on the flushing list, the remove session
> caps callback will try to release the capsnap->cap_flush memory back to
> the "ceph_cap_flush_cachep" slab cache, but that memory is embedded in
> the capsnap and was actually allocated from the kmalloc-256 slab cache.
>
> URL: https://tracker.ceph.com/issues/52283
> Signed-off-by: Xiubo Li <xiubli@redhat.com>
> ---
>  fs/ceph/mds_client.c | 10 +++++++++-
>  1 file changed, 9 insertions(+), 1 deletion(-)
>
> diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
> index 00b3b0a0beb8..cb517753bb17 100644
> --- a/fs/ceph/mds_client.c
> +++ b/fs/ceph/mds_client.c
> @@ -1264,10 +1264,18 @@ static int remove_session_caps_cb(struct inode *inode, struct ceph_cap *cap,
>         spin_unlock(&ci->i_ceph_lock);
>         while (!list_empty(&to_remove)) {
>                 struct ceph_cap_flush *cf;
> +               struct ceph_cap_snap *capsnap;

Hi Xiubo,

Add an empty line here.

>                 cf = list_first_entry(&to_remove,
>                                       struct ceph_cap_flush, i_list);
>                 list_del(&cf->i_list);
> -               ceph_free_cap_flush(cf);
> +               if (cf->caps) {
> +                       ceph_free_cap_flush(cf);
> +               } else if (READ_ONCE(fsc->mount_state) == CEPH_MOUNT_SHUTDOWN) {

What does this condition guard against?  Are there other cases of
ceph_cap_flush being embedded that need to be handled differently
on !SHUTDOWN?

Should capsnaps be on to_remove list in the first place?

This sounds like stable material to me.

Thanks,

                Ilya
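
For context on the condition being questioned above: a regular cap
flush is allocated standalone from "ceph_cap_flush_cachep" and records
the cap bits being flushed, while a capsnap's cap_flush is embedded in
struct ceph_cap_snap and its caps field is never set. The cf->caps test
relies on exactly that distinction. A rough sketch (simplified;
locking, error handling, and most fields omitted):

    /* Standalone flush: dedicated cache, caps != 0 once prepared. */
    struct ceph_cap_flush *cf = ceph_alloc_cap_flush();
    cf->caps = flushing;    /* 'flushing' is illustrative: the caps being written back */

    /* Capsnap flush: embedded and zeroed with its parent, caps stays 0. */
    struct ceph_cap_snap *capsnap = kzalloc(sizeof(*capsnap), GFP_NOFS);
    WARN_ON(capsnap->cap_flush.caps != 0);  /* zero from the kzalloc() */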

Patch

diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
index 00b3b0a0beb8..cb517753bb17 100644
--- a/fs/ceph/mds_client.c
+++ b/fs/ceph/mds_client.c
@@ -1264,10 +1264,18 @@  static int remove_session_caps_cb(struct inode *inode, struct ceph_cap *cap,
 	spin_unlock(&ci->i_ceph_lock);
 	while (!list_empty(&to_remove)) {
 		struct ceph_cap_flush *cf;
+		struct ceph_cap_snap *capsnap;
 		cf = list_first_entry(&to_remove,
 				      struct ceph_cap_flush, i_list);
 		list_del(&cf->i_list);
-		ceph_free_cap_flush(cf);
+		if (cf->caps) {
+			ceph_free_cap_flush(cf);
+		} else if (READ_ONCE(fsc->mount_state) == CEPH_MOUNT_SHUTDOWN) {
+			capsnap = container_of(cf, struct ceph_cap_snap, cap_flush);
+			list_del(&capsnap->ci_item);
+			ceph_put_snap_context(capsnap->context);
+			ceph_put_cap_snap(capsnap);
+		}
 	}
 
 	wake_up_all(&ci->i_cap_wq);
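
A note on the recovery step: container_of() takes a pointer to a member
embedded in a larger object and computes the address of the enclosing
object, which is what lets the loop above hand the whole capsnap, not
just its cap_flush, to the proper release path. A generic sketch of the
pattern:

    struct inner { int x; };

    struct outer {
            int id;
            struct inner member;    /* embedded, like cap_flush in ceph_cap_snap */
    };

    struct outer o;
    struct inner *p = &o.member;
    /* Recover the enclosing object from the embedded member's address. */
    struct outer *back = container_of(p, struct outer, member);    /* back == &o */

The recovered capsnap is then unlinked from the inode's capsnap list
(ci_item), and its snap context and capsnap references are dropped,
which appears to mirror the cleanup the flush-ack path would normally
perform.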