
[v7,00/11] ceph: support idmapped mounts

Message ID 20230726141026.307690-1-aleksandr.mikhalitsyn@canonical.com

Message

Aleksandr Mikhalitsyn July 26, 2023, 2:10 p.m. UTC
Dear friends,

This patchset was originally developed by Christian Brauner but I'll continue
to push it forward. Christian allowed me to do that :)

This feature is already actively used/tested with the LXD/LXC projects.

Git tree (based on https://github.com/ceph/ceph-client.git testing):
v7: https://github.com/mihalicyn/linux/commits/fs.idmapped.ceph.v7
current: https://github.com/mihalicyn/linux/tree/fs.idmapped.ceph

In version 3 I changed only two commits:
- fs: export mnt_idmap_get/mnt_idmap_put
- ceph: allow idmapped setattr inode op
and added a new one:
- ceph: pass idmap to __ceph_setattr

In version 4 I reworked the ("ceph: stash idmapping in mdsc request")
commit. Now we take the idmap reference right where req->r_mnt_idmap
is filled. It's a safer approach and prevents a possible refcount underflow
on error paths where __register_request wasn't called but
ceph_mdsc_release_request is called.
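
For illustration, the refcounting pattern described above looks roughly
like this (a sketch only; the exact call sites and surrounding checks in
the series may differ):

/* where the idmapping is recorded in the mdsc request: */
req->r_mnt_idmap = mnt_idmap_get(idmap);

/* and in ceph_mdsc_release_request(), which runs on all paths
 * (whether or not __register_request() was ever called): */
mnt_idmap_put(req->r_mnt_idmap);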

Changelog for version 5:
- a few commits were squashed into one (as suggested by Xiubo Li)
- started passing an idmapping everywhere (if possible), so the caller's
UID/GID will be mapped almost everywhere (as suggested by Xiubo Li)

Changelog for version 6:
- rebased on top of testing branch
- passed an idmapping in a few places (readdir, ceph_netfs_issue_op_inline)

Changelog for version 7:
- rebased on top of testing branch
- the series now requires a new cephfs protocol extension, CEPHFS_FEATURE_HAS_OWNER_UIDGID:
https://github.com/ceph/ceph/pull/52575

I can confirm that this version passes xfstests.

Links to previous versions:
v1: https://lore.kernel.org/all/20220104140414.155198-1-brauner@kernel.org/
v2: https://lore.kernel.org/lkml/20230524153316.476973-1-aleksandr.mikhalitsyn@canonical.com/
tree: https://github.com/mihalicyn/linux/commits/fs.idmapped.ceph.v2
v3: https://lore.kernel.org/lkml/20230607152038.469739-1-aleksandr.mikhalitsyn@canonical.com/#t
v4: https://lore.kernel.org/lkml/20230607180958.645115-1-aleksandr.mikhalitsyn@canonical.com/#t
tree: https://github.com/mihalicyn/linux/commits/fs.idmapped.ceph.v4
v5: https://lore.kernel.org/lkml/20230608154256.562906-1-aleksandr.mikhalitsyn@canonical.com/#t
tree: https://github.com/mihalicyn/linux/commits/fs.idmapped.ceph.v5
v6: https://lore.kernel.org/lkml/20230609093125.252186-1-aleksandr.mikhalitsyn@canonical.com/
tree: https://github.com/mihalicyn/linux/commits/fs.idmapped.ceph.v6

Kind regards,
Alex

Original description from Christian:
========================================================================
This patch series enables cephfs to support idmapped mounts, i.e. the
ability to alter ownership information on a per-mount basis.

Container managers such as LXD support sharing data via cephfs between
the host and unprivileged containers and between unprivileged containers.
They may all use different idmappings. Idmapped mounts can be used to
create mounts with the idmapping used for the container (or a different
one specific to the use-case).

There are in fact more use-cases such as remapping ownership for
mountpoints on the host itself to grant or restrict access to different
users or to make it possible to enforce that programs running as root
will write with a non-zero {g,u}id to disk.
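
For illustration, here is a minimal sketch of how such an idmapped mount
can be created from userspace with the new mount API (roughly what the
mount-idmapped helper referenced later in this thread does). The paths,
the user-namespace fd and the lack of error reporting are simplifications:

#define _GNU_SOURCE
#include <fcntl.h>
#include <linux/mount.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Sketch only: clone an existing cephfs mount and attach an idmapping to
 * the clone. "/mnt/cephfs" and "/mnt/cephfs-idmapped" are placeholder
 * paths; userns_fd refers to the user namespace whose idmapping is used. */
static int make_idmapped_mount(int userns_fd)
{
	struct mount_attr attr = {
		.attr_set = MOUNT_ATTR_IDMAP,
		.userns_fd = userns_fd,
	};
	int fd_tree;

	fd_tree = syscall(SYS_open_tree, AT_FDCWD, "/mnt/cephfs",
			  OPEN_TREE_CLONE | OPEN_TREE_CLOEXEC);
	if (fd_tree < 0)
		return -1;

	if (syscall(SYS_mount_setattr, fd_tree, "", AT_EMPTY_PATH,
		    &attr, sizeof(attr)) < 0)
		return -1;

	/* attach the idmapped clone at a second mountpoint */
	return syscall(SYS_move_mount, fd_tree, "", AT_FDCWD,
		       "/mnt/cephfs-idmapped", MOVE_MOUNT_F_EMPTY_PATH);
}

Note that the idmapped mount is always derived from an existing, ordinary
mount of the filesystem.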

The patch series is simple overall and few changes are needed to cephfs.
There is one cephfs specific issue that I would like to discuss and
solve which I explain in detail in:

[PATCH 02/12] ceph: handle idmapped mounts in create_request_message()

It has to do with how to handle MDS servers which have id-based access
restrictions configured. I would ask you to please take a look at the
explanation in the aforementioned patch.

The patch series passes the vfs and idmapped mount testsuite as part of
xfstests. To run it you will need a config like:

[ceph]
export FSTYP=ceph
export TEST_DIR=/mnt/test
export TEST_DEV=10.103.182.10:6789:/
export TEST_FS_MOUNT_OPTS="-o name=admin,secret=$password"

and then simply call

sudo ./check -g idmapped

========================================================================

Alexander Mikhalitsyn (3):
  fs: export mnt_idmap_get/mnt_idmap_put
  ceph: handle idmapped mounts in create_request_message()
  ceph: pass idmap to __ceph_setattr

Christian Brauner (8):
  ceph: stash idmapping in mdsc request
  ceph: pass an idmapping to mknod/symlink/mkdir
  ceph: allow idmapped getattr inode op
  ceph: allow idmapped permission inode op
  ceph: allow idmapped setattr inode op
  ceph/acl: allow idmapped set_acl inode op
  ceph/file: allow idmapped atomic_open inode op
  ceph: allow idmapped mounts

 fs/ceph/acl.c                 |  6 +++---
 fs/ceph/crypto.c              |  2 +-
 fs/ceph/dir.c                 |  3 +++
 fs/ceph/file.c                | 10 ++++++++--
 fs/ceph/inode.c               | 29 +++++++++++++++++------------
 fs/ceph/mds_client.c          | 25 +++++++++++++++++++++++++
 fs/ceph/mds_client.h          |  6 +++++-
 fs/ceph/super.c               |  2 +-
 fs/ceph/super.h               |  3 ++-
 fs/mnt_idmapping.c            |  2 ++
 include/linux/ceph/ceph_fs.h  |  4 +++-
 include/linux/mnt_idmapping.h |  3 +++
 12 files changed, 73 insertions(+), 22 deletions(-)

Comments

Aleksandr Mikhalitsyn July 27, 2023, 6:36 a.m. UTC | #1
On Thu, Jul 27, 2023 at 7:30 AM Xiubo Li <xiubli@redhat.com> wrote:
>
>
> On 7/26/23 22:10, Alexander Mikhalitsyn wrote:
> > Inode operations that create a new filesystem object such as ->mknod,
> > ->create, ->mkdir() and others don't take a {g,u}id argument explicitly.
> > Instead the caller's fs{g,u}id is used for the {g,u}id of the new
> > filesystem object.
> >
> > In order to ensure that the correct {g,u}id is used map the caller's
> > fs{g,u}id for creation requests. This doesn't require complex changes.
> > It suffices to pass in the relevant idmapping recorded in the request
> > message. If this request message was triggered from an inode operation
> > that creates filesystem objects it will have passed down the relevant
> > idmaping. If this is a request message that was triggered from an inode
> > operation that doens't need to take idmappings into account the initial
> > idmapping is passed down which is an identity mapping.
> >
> > This change uses a new cephfs protocol extension CEPHFS_FEATURE_HAS_OWNER_UIDGID
> > which adds two new fields (owner_{u,g}id) to the request head structure.
> > So, we need to ensure that MDS supports it otherwise we need to fail
> > any IO that comes through an idmapped mount because we can't process it
> > in a proper way. MDS server without such an extension will use caller_{u,g}id
> > fields to set a new inode owner UID/GID which is incorrect because caller_{u,g}id
> > values are unmapped. At the same time we can't map these fields with an
> > idmapping as it can break UID/GID-based permission checks logic on the
> > MDS side. This problem was described with a lot of details at [1], [2].
> >
> > [1] https://lore.kernel.org/lkml/CAEivzxfw1fHO2TFA4dx3u23ZKK6Q+EThfzuibrhA3RKM=ZOYLg@mail.gmail.com/
> > [2] https://lore.kernel.org/all/20220104140414.155198-3-brauner@kernel.org/
> >
> > Cc: Xiubo Li <xiubli@redhat.com>
> > Cc: Jeff Layton <jlayton@kernel.org>
> > Cc: Ilya Dryomov <idryomov@gmail.com>
> > Cc: ceph-devel@vger.kernel.org
> > Co-Developed-by: Alexander Mikhalitsyn <aleksandr.mikhalitsyn@canonical.com>
> > Signed-off-by: Christian Brauner <christian.brauner@ubuntu.com>
> > Signed-off-by: Alexander Mikhalitsyn <aleksandr.mikhalitsyn@canonical.com>
> > ---
> > v7:
> >       - reworked to use two new fields for owner UID/GID (https://github.com/ceph/ceph/pull/52575)
> > ---
> >   fs/ceph/mds_client.c         | 20 ++++++++++++++++++++
> >   fs/ceph/mds_client.h         |  5 ++++-
> >   include/linux/ceph/ceph_fs.h |  4 +++-
> >   3 files changed, 27 insertions(+), 2 deletions(-)
> >
> > diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
> > index c641ab046e98..ac095a95f3d0 100644
> > --- a/fs/ceph/mds_client.c
> > +++ b/fs/ceph/mds_client.c
> > @@ -2923,6 +2923,7 @@ static struct ceph_msg *create_request_message(struct ceph_mds_session *session,
> >   {
> >       int mds = session->s_mds;
> >       struct ceph_mds_client *mdsc = session->s_mdsc;
> > +     struct ceph_client *cl = mdsc->fsc->client;
> >       struct ceph_msg *msg;
> >       struct ceph_mds_request_head_legacy *lhead;
> >       const char *path1 = NULL;
> > @@ -3028,6 +3029,16 @@ static struct ceph_msg *create_request_message(struct ceph_mds_session *session,
> >       lhead = find_legacy_request_head(msg->front.iov_base,
> >                                        session->s_con.peer_features);
> >
> > +     if ((req->r_mnt_idmap != &nop_mnt_idmap) &&
> > +         !test_bit(CEPHFS_FEATURE_HAS_OWNER_UIDGID, &session->s_features)) {
> > +             pr_err_ratelimited_client(cl,
> > +                     "idmapped mount is used and CEPHFS_FEATURE_HAS_OWNER_UIDGID"
> > +                     " is not supported by MDS. Fail request with -EIO.\n");
> > +
> > +             ret = -EIO;
> > +             goto out_err;
> > +     }
> > +
>
> I think this couldn't fail the mounting operation, right ?

This won't fail mounting. First of all, an idmapped mount is always an
additional mount: you always start by doing a "normal" mount, and only
after that can you use that mount to create an idmapped one.
(example: https://github.com/brauner/mount-idmapped/tree/master )

>
> IMO we should fail the mounting from the beginning.

Unfortunately, we can't fail the mount from the beginning. Creation of
idmapped mounts is handled not at the filesystem level but at the VFS level
(source: https://github.com/torvalds/linux/blob/0a8db05b571ad5b8d5c8774a004c0424260a90bd/fs/namespace.c#L4277 )

The kernel performs all the required checks, namely:
- the filesystem type has declared support for idmappings
(fs_type->fs_flags & FS_ALLOW_IDMAP); see the sketch just below
- the user who creates the idmapped mount must have CAP_SYS_ADMIN in the
user namespace that owns the superblock of the filesystem
(for cephfs this is always init_user_ns => the user must be root on the host)
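
For reference, a minimal sketch of that first opt-in (the "example_*"
names are hypothetical; in this series the real change is adding
FS_ALLOW_IDMAP to ceph_fs_type's fs_flags in fs/ceph/super.c):

static struct file_system_type example_fs_type = {
	.owner		 = THIS_MODULE,
	.name		 = "example",
	.init_fs_context = example_init_fs_context,	/* hypothetical helper */
	.kill_sb	 = kill_anon_super,
	.fs_flags	 = FS_ALLOW_IDMAP,	/* lets the VFS accept MOUNT_ATTR_IDMAP */
};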

So I would like to go this way for the reasons mentioned above:
- the root user is someone who understands what he is doing.
- idmapped mounts are never "first" mounts; they are always created
after a "normal" mount.
- effectively this check lets a "normal" mount work normally and fails
only the requests that come through an idmapped mount, with a reasonable
error message. Obviously, all read operations will keep working perfectly
well; only the operations that create new inodes will fail.
Btw, we already have analogous semantics at the VFS level for users who
have no UID/GID mapping to the host: filesystem requests for such users
fail with -EOVERFLOW. Here we have something close.
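
For reference, that VFS-level behaviour is roughly the following check
(simplified from may_create() in fs/namei.c; shown only to illustrate
the analogy):

	/* creation through a mount where the caller's fs{g,u}id cannot be
	 * represented fails early with -EOVERFLOW */
	if (!fsuidgid_has_mapping(dir->i_sb, idmap))
		return -EOVERFLOW;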

I think we can take another look at this in the future, when some other
filesystem requires the same kind of filesystem-level check on idmapped
mount creation. (We could introduce an extra callback at the superblock
level or something like that.) But I think it makes sense to do that once
cephfs is allowed to be mounted inside a user namespace.

I hope that Christian Brauner will add something here. :-)

Kind regards,
Alex

>
> Thanks
>
> - Xiubo
>
>
> >       /*
> >        * The ceph_mds_request_head_legacy didn't contain a version field, and
> >        * one was added when we moved the message version from 3->4.
> > @@ -3043,10 +3054,19 @@ static struct ceph_msg *create_request_message(struct ceph_mds_session *session,
> >               p = msg->front.iov_base + sizeof(*ohead);
> >       } else {
> >               struct ceph_mds_request_head *nhead = msg->front.iov_base;
> > +             kuid_t owner_fsuid;
> > +             kgid_t owner_fsgid;
> >
> >               msg->hdr.version = cpu_to_le16(6);
> >               nhead->version = cpu_to_le16(CEPH_MDS_REQUEST_HEAD_VERSION);
> >               p = msg->front.iov_base + sizeof(*nhead);
> > +
> > +             owner_fsuid = from_vfsuid(req->r_mnt_idmap, &init_user_ns,
> > +                                       VFSUIDT_INIT(req->r_cred->fsuid));
> > +             owner_fsgid = from_vfsgid(req->r_mnt_idmap, &init_user_ns,
> > +                                       VFSGIDT_INIT(req->r_cred->fsgid));
> > +             nhead->owner_uid = cpu_to_le32(from_kuid(&init_user_ns, owner_fsuid));
> > +             nhead->owner_gid = cpu_to_le32(from_kgid(&init_user_ns, owner_fsgid));
> >       }
> >
> >       end = msg->front.iov_base + msg->front.iov_len;
> > diff --git a/fs/ceph/mds_client.h b/fs/ceph/mds_client.h
> > index e3bbf3ba8ee8..8f683e8203bd 100644
> > --- a/fs/ceph/mds_client.h
> > +++ b/fs/ceph/mds_client.h
> > @@ -33,8 +33,10 @@ enum ceph_feature_type {
> >       CEPHFS_FEATURE_NOTIFY_SESSION_STATE,
> >       CEPHFS_FEATURE_OP_GETVXATTR,
> >       CEPHFS_FEATURE_32BITS_RETRY_FWD,
> > +     CEPHFS_FEATURE_NEW_SNAPREALM_INFO,
> > +     CEPHFS_FEATURE_HAS_OWNER_UIDGID,
> >
> > -     CEPHFS_FEATURE_MAX = CEPHFS_FEATURE_32BITS_RETRY_FWD,
> > +     CEPHFS_FEATURE_MAX = CEPHFS_FEATURE_HAS_OWNER_UIDGID,
> >   };
> >
> >   #define CEPHFS_FEATURES_CLIENT_SUPPORTED {  \
> > @@ -49,6 +51,7 @@ enum ceph_feature_type {
> >       CEPHFS_FEATURE_NOTIFY_SESSION_STATE,    \
> >       CEPHFS_FEATURE_OP_GETVXATTR,            \
> >       CEPHFS_FEATURE_32BITS_RETRY_FWD,        \
> > +     CEPHFS_FEATURE_HAS_OWNER_UIDGID,        \
> >   }
> >
> >   /*
> > diff --git a/include/linux/ceph/ceph_fs.h b/include/linux/ceph/ceph_fs.h
> > index 5f2301ee88bc..6eb83a51341c 100644
> > --- a/include/linux/ceph/ceph_fs.h
> > +++ b/include/linux/ceph/ceph_fs.h
> > @@ -499,7 +499,7 @@ struct ceph_mds_request_head_legacy {
> >       union ceph_mds_request_args args;
> >   } __attribute__ ((packed));
> >
> > -#define CEPH_MDS_REQUEST_HEAD_VERSION  2
> > +#define CEPH_MDS_REQUEST_HEAD_VERSION  3
> >
> >   struct ceph_mds_request_head_old {
> >       __le16 version;                /* struct version */
> > @@ -530,6 +530,8 @@ struct ceph_mds_request_head {
> >
> >       __le32 ext_num_retry;          /* new count retry attempts */
> >       __le32 ext_num_fwd;            /* new count fwd attempts */
> > +
> > +     __le32 owner_uid, owner_gid;   /* used for OPs which create inodes */
> >   } __attribute__ ((packed));
> >
> >   /* cap/lease release record */
>
Christian Brauner July 27, 2023, 9:01 a.m. UTC | #2
On Thu, Jul 27, 2023 at 08:36:40AM +0200, Aleksandr Mikhalitsyn wrote:
> On Thu, Jul 27, 2023 at 7:30 AM Xiubo Li <xiubli@redhat.com> wrote:
> >
> >
> > On 7/26/23 22:10, Alexander Mikhalitsyn wrote:
> > > Inode operations that create a new filesystem object such as ->mknod,
> > > ->create, ->mkdir() and others don't take a {g,u}id argument explicitly.
> > > Instead the caller's fs{g,u}id is used for the {g,u}id of the new
> > > filesystem object.
> > >
> > > In order to ensure that the correct {g,u}id is used map the caller's
> > > fs{g,u}id for creation requests. This doesn't require complex changes.
> > > It suffices to pass in the relevant idmapping recorded in the request
> > > message. If this request message was triggered from an inode operation
> > > that creates filesystem objects it will have passed down the relevant
> > > idmapping. If this is a request message that was triggered from an inode
> > > operation that doesn't need to take idmappings into account, the initial
> > > idmapping is passed down which is an identity mapping.
> > >
> > > This change uses a new cephfs protocol extension CEPHFS_FEATURE_HAS_OWNER_UIDGID
> > > which adds two new fields (owner_{u,g}id) to the request head structure.
> > > So, we need to ensure that MDS supports it otherwise we need to fail
> > > any IO that comes through an idmapped mount because we can't process it
> > > in a proper way. MDS server without such an extension will use caller_{u,g}id
> > > fields to set a new inode owner UID/GID which is incorrect because caller_{u,g}id
> > > values are unmapped. At the same time we can't map these fields with an
> > > idmapping as it can break UID/GID-based permission checks logic on the
> > > MDS side. This problem was described with a lot of details at [1], [2].
> > >
> > > [1] https://lore.kernel.org/lkml/CAEivzxfw1fHO2TFA4dx3u23ZKK6Q+EThfzuibrhA3RKM=ZOYLg@mail.gmail.com/
> > > [2] https://lore.kernel.org/all/20220104140414.155198-3-brauner@kernel.org/
> > >
> > > Cc: Xiubo Li <xiubli@redhat.com>
> > > Cc: Jeff Layton <jlayton@kernel.org>
> > > Cc: Ilya Dryomov <idryomov@gmail.com>
> > > Cc: ceph-devel@vger.kernel.org
> > > Co-Developed-by: Alexander Mikhalitsyn <aleksandr.mikhalitsyn@canonical.com>
> > > Signed-off-by: Christian Brauner <christian.brauner@ubuntu.com>
> > > Signed-off-by: Alexander Mikhalitsyn <aleksandr.mikhalitsyn@canonical.com>
> > > ---
> > > v7:
> > >       - reworked to use two new fields for owner UID/GID (https://github.com/ceph/ceph/pull/52575)
> > > ---
> > >   fs/ceph/mds_client.c         | 20 ++++++++++++++++++++
> > >   fs/ceph/mds_client.h         |  5 ++++-
> > >   include/linux/ceph/ceph_fs.h |  4 +++-
> > >   3 files changed, 27 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
> > > index c641ab046e98..ac095a95f3d0 100644
> > > --- a/fs/ceph/mds_client.c
> > > +++ b/fs/ceph/mds_client.c
> > > @@ -2923,6 +2923,7 @@ static struct ceph_msg *create_request_message(struct ceph_mds_session *session,
> > >   {
> > >       int mds = session->s_mds;
> > >       struct ceph_mds_client *mdsc = session->s_mdsc;
> > > +     struct ceph_client *cl = mdsc->fsc->client;
> > >       struct ceph_msg *msg;
> > >       struct ceph_mds_request_head_legacy *lhead;
> > >       const char *path1 = NULL;
> > > @@ -3028,6 +3029,16 @@ static struct ceph_msg *create_request_message(struct ceph_mds_session *session,
> > >       lhead = find_legacy_request_head(msg->front.iov_base,
> > >                                        session->s_con.peer_features);
> > >
> > > +     if ((req->r_mnt_idmap != &nop_mnt_idmap) &&
> > > +         !test_bit(CEPHFS_FEATURE_HAS_OWNER_UIDGID, &session->s_features)) {
> > > +             pr_err_ratelimited_client(cl,
> > > +                     "idmapped mount is used and CEPHFS_FEATURE_HAS_OWNER_UIDGID"
> > > +                     " is not supported by MDS. Fail request with -EIO.\n");
> > > +
> > > +             ret = -EIO;
> > > +             goto out_err;
> > > +     }
> > > +
> >
> > I think this couldn't fail the mounting operation, right ?
> 
> This won't fail mounting. First of all an idmapped mount is always an
> additional mount, you always
> start from doing "normal" mount and only after that you can use this
> mount to create an idmapped one.
> ( example: https://github.com/brauner/mount-idmapped/tree/master )
> 
> >
> > IMO we should fail the mounting from the beginning.
> 
> Unfortunately, we can't fail mount from the beginning. Procedure of
> the idmapped mounts
> creation is handled not on the filesystem level, but on the VFS level

Correct. It's a generic vfsmount feature.

> (source: https://github.com/torvalds/linux/blob/0a8db05b571ad5b8d5c8774a004c0424260a90bd/fs/namespace.c#L4277
> )
> 
> Kernel perform all required checks as:
> - filesystem type has declared to support idmappings
> (fs_type->fs_flags & FS_ALLOW_IDMAP)
> - user who creates idmapped mount should be CAP_SYS_ADMIN in a user
> namespace that owns superblock of the filesystem
> (for cephfs it's always init_user_ns => user should be root on the host)
> 
> So I would like to go this way because of the reasons mentioned above:
> - root user is someone who understands what he does.
> - idmapped mounts are never "first" mounts. They are always created
> after "normal" mount.
> - effectively this check makes "normal" mount to work normally and
> fail only requests that comes through an idmapped mounts
> with reasonable error message. Obviously, all read operations will
> work perfectly well only the operations that create new inodes will
> fail.
> Btw, we already have an analogical semantic on the VFS level for users
> who have no UID/GID mapping to the host. Filesystem requests for
> such users will fail with -EOVERFLOW. Here we have something close.

Refusing requests coming from an idmapped mount if the server lacks the
appropriate feature is good enough as a first step imho. And yes, we do
have similar logic on the vfs level for unmapped uid/gid.
Aleksandr Mikhalitsyn July 27, 2023, 9:48 a.m. UTC | #3
On Thu, Jul 27, 2023 at 11:01 AM Christian Brauner <brauner@kernel.org> wrote:
>
> On Thu, Jul 27, 2023 at 08:36:40AM +0200, Aleksandr Mikhalitsyn wrote:
> > On Thu, Jul 27, 2023 at 7:30 AM Xiubo Li <xiubli@redhat.com> wrote:
> > >
> > >
> > > On 7/26/23 22:10, Alexander Mikhalitsyn wrote:
> > > > Inode operations that create a new filesystem object such as ->mknod,
> > > > ->create, ->mkdir() and others don't take a {g,u}id argument explicitly.
> > > > Instead the caller's fs{g,u}id is used for the {g,u}id of the new
> > > > filesystem object.
> > > >
> > > > In order to ensure that the correct {g,u}id is used map the caller's
> > > > fs{g,u}id for creation requests. This doesn't require complex changes.
> > > > It suffices to pass in the relevant idmapping recorded in the request
> > > > message. If this request message was triggered from an inode operation
> > > > that creates filesystem objects it will have passed down the relevant
> > > > idmapping. If this is a request message that was triggered from an inode
> > > > operation that doesn't need to take idmappings into account, the initial
> > > > idmapping is passed down which is an identity mapping.
> > > >
> > > > This change uses a new cephfs protocol extension CEPHFS_FEATURE_HAS_OWNER_UIDGID
> > > > which adds two new fields (owner_{u,g}id) to the request head structure.
> > > > So, we need to ensure that MDS supports it otherwise we need to fail
> > > > any IO that comes through an idmapped mount because we can't process it
> > > > in a proper way. MDS server without such an extension will use caller_{u,g}id
> > > > fields to set a new inode owner UID/GID which is incorrect because caller_{u,g}id
> > > > values are unmapped. At the same time we can't map these fields with an
> > > > idmapping as it can break UID/GID-based permission checks logic on the
> > > > MDS side. This problem was described with a lot of details at [1], [2].
> > > >
> > > > [1] https://lore.kernel.org/lkml/CAEivzxfw1fHO2TFA4dx3u23ZKK6Q+EThfzuibrhA3RKM=ZOYLg@mail.gmail.com/
> > > > [2] https://lore.kernel.org/all/20220104140414.155198-3-brauner@kernel.org/
> > > >
> > > > Cc: Xiubo Li <xiubli@redhat.com>
> > > > Cc: Jeff Layton <jlayton@kernel.org>
> > > > Cc: Ilya Dryomov <idryomov@gmail.com>
> > > > Cc: ceph-devel@vger.kernel.org
> > > > Co-Developed-by: Alexander Mikhalitsyn <aleksandr.mikhalitsyn@canonical.com>
> > > > Signed-off-by: Christian Brauner <christian.brauner@ubuntu.com>
> > > > Signed-off-by: Alexander Mikhalitsyn <aleksandr.mikhalitsyn@canonical.com>
> > > > ---
> > > > v7:
> > > >       - reworked to use two new fields for owner UID/GID (https://github.com/ceph/ceph/pull/52575)
> > > > ---
> > > >   fs/ceph/mds_client.c         | 20 ++++++++++++++++++++
> > > >   fs/ceph/mds_client.h         |  5 ++++-
> > > >   include/linux/ceph/ceph_fs.h |  4 +++-
> > > >   3 files changed, 27 insertions(+), 2 deletions(-)
> > > >
> > > > diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
> > > > index c641ab046e98..ac095a95f3d0 100644
> > > > --- a/fs/ceph/mds_client.c
> > > > +++ b/fs/ceph/mds_client.c
> > > > @@ -2923,6 +2923,7 @@ static struct ceph_msg *create_request_message(struct ceph_mds_session *session,
> > > >   {
> > > >       int mds = session->s_mds;
> > > >       struct ceph_mds_client *mdsc = session->s_mdsc;
> > > > +     struct ceph_client *cl = mdsc->fsc->client;
> > > >       struct ceph_msg *msg;
> > > >       struct ceph_mds_request_head_legacy *lhead;
> > > >       const char *path1 = NULL;
> > > > @@ -3028,6 +3029,16 @@ static struct ceph_msg *create_request_message(struct ceph_mds_session *session,
> > > >       lhead = find_legacy_request_head(msg->front.iov_base,
> > > >                                        session->s_con.peer_features);
> > > >
> > > > +     if ((req->r_mnt_idmap != &nop_mnt_idmap) &&
> > > > +         !test_bit(CEPHFS_FEATURE_HAS_OWNER_UIDGID, &session->s_features)) {
> > > > +             pr_err_ratelimited_client(cl,
> > > > +                     "idmapped mount is used and CEPHFS_FEATURE_HAS_OWNER_UIDGID"
> > > > +                     " is not supported by MDS. Fail request with -EIO.\n");
> > > > +
> > > > +             ret = -EIO;
> > > > +             goto out_err;
> > > > +     }
> > > > +
> > >
> > > I think this couldn't fail the mounting operation, right ?
> >
> > This won't fail mounting. First of all an idmapped mount is always an
> > additional mount, you always
> > start from doing "normal" mount and only after that you can use this
> > mount to create an idmapped one.
> > ( example: https://github.com/brauner/mount-idmapped/tree/master )
> >
> > >
> > > IMO we should fail the mounting from the beginning.
> >
> > Unfortunately, we can't fail mount from the beginning. Procedure of
> > the idmapped mounts
> > creation is handled not on the filesystem level, but on the VFS level
>
> Correct. It's a generic vfsmount feature.
>
> > (source: https://github.com/torvalds/linux/blob/0a8db05b571ad5b8d5c8774a004c0424260a90bd/fs/namespace.c#L4277
> > )
> >
> > Kernel perform all required checks as:
> > - filesystem type has declared to support idmappings
> > (fs_type->fs_flags & FS_ALLOW_IDMAP)
> > - user who creates idmapped mount should be CAP_SYS_ADMIN in a user
> > namespace that owns superblock of the filesystem
> > (for cephfs it's always init_user_ns => user should be root on the host)
> >
> > So I would like to go this way because of the reasons mentioned above:
> > - root user is someone who understands what he does.
> > - idmapped mounts are never "first" mounts. They are always created
> > after "normal" mount.
> > - effectively this check makes "normal" mount to work normally and
> > fail only requests that comes through an idmapped mounts
> > with reasonable error message. Obviously, all read operations will
> > work perfectly well only the operations that create new inodes will
> > fail.
> > Btw, we already have an analogical semantic on the VFS level for users
> > who have no UID/GID mapping to the host. Filesystem requests for
> > such users will fail with -EOVERFLOW. Here we have something close.
>
> Refusing requests coming from an idmapped mount if the server misses
> appropriate features is good enough as a first step imho. And yes, we do
> have similar logic on the vfs level for unmapped uid/gid.

Thanks, Christian!

I wanted to add that the alternative here is to modify the caller_{u,g}id
fields as was done in the first approach. That would break the
UID/GID-based permission model for old MDS versions (we could add a
printk_once to inform the user about this), but at the same time it would
allow us to support idmapped mounts in all cases. This support would be
not fully ideal for old MDS versions and perfectly fine for new ones.

Alternatively, we could introduce a cephfs mount option like
"idmap_with_old_mds": if it's enabled we set caller_{u,g}id for an MDS
without CEPHFS_FEATURE_HAS_OWNER_UIDGID, and if it's disabled (the
default) we fail requests with -EIO. For a new MDS everything goes the
right way either way.
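
Purely as an illustration of that second option (nothing like this exists
in the series; the mount option flag is hypothetical), the check from this
patch could be gated roughly like this:

	if (req->r_mnt_idmap != &nop_mnt_idmap &&
	    !test_bit(CEPHFS_FEATURE_HAS_OWNER_UIDGID, &session->s_features) &&
	    !ceph_test_mount_opt(mdsc->fsc, IDMAP_WITH_OLD_MDS)) {
		/* CEPH_MOUNT_OPT_IDMAP_WITH_OLD_MDS is hypothetical here */
		ret = -EIO;
		goto out_err;
	}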

Kind regards,
Alex
Stéphane Graber July 27, 2023, 2:46 p.m. UTC | #4
On Thu, Jul 27, 2023 at 5:48 AM Aleksandr Mikhalitsyn
<aleksandr.mikhalitsyn@canonical.com> wrote:
>
> On Thu, Jul 27, 2023 at 11:01 AM Christian Brauner <brauner@kernel.org> wrote:
> >
> > On Thu, Jul 27, 2023 at 08:36:40AM +0200, Aleksandr Mikhalitsyn wrote:
> > > On Thu, Jul 27, 2023 at 7:30 AM Xiubo Li <xiubli@redhat.com> wrote:
> > > >
> > > >
> > > > On 7/26/23 22:10, Alexander Mikhalitsyn wrote:
> > > > > Inode operations that create a new filesystem object such as ->mknod,
> > > > > ->create, ->mkdir() and others don't take a {g,u}id argument explicitly.
> > > > > Instead the caller's fs{g,u}id is used for the {g,u}id of the new
> > > > > filesystem object.
> > > > >
> > > > > In order to ensure that the correct {g,u}id is used map the caller's
> > > > > fs{g,u}id for creation requests. This doesn't require complex changes.
> > > > > It suffices to pass in the relevant idmapping recorded in the request
> > > > > message. If this request message was triggered from an inode operation
> > > > > that creates filesystem objects it will have passed down the relevant
> > > > > idmapping. If this is a request message that was triggered from an inode
> > > > > operation that doesn't need to take idmappings into account, the initial
> > > > > idmapping is passed down which is an identity mapping.
> > > > >
> > > > > This change uses a new cephfs protocol extension CEPHFS_FEATURE_HAS_OWNER_UIDGID
> > > > > which adds two new fields (owner_{u,g}id) to the request head structure.
> > > > > So, we need to ensure that MDS supports it otherwise we need to fail
> > > > > any IO that comes through an idmapped mount because we can't process it
> > > > > in a proper way. MDS server without such an extension will use caller_{u,g}id
> > > > > fields to set a new inode owner UID/GID which is incorrect because caller_{u,g}id
> > > > > values are unmapped. At the same time we can't map these fields with an
> > > > > idmapping as it can break UID/GID-based permission checks logic on the
> > > > > MDS side. This problem was described with a lot of details at [1], [2].
> > > > >
> > > > > [1] https://lore.kernel.org/lkml/CAEivzxfw1fHO2TFA4dx3u23ZKK6Q+EThfzuibrhA3RKM=ZOYLg@mail.gmail.com/
> > > > > [2] https://lore.kernel.org/all/20220104140414.155198-3-brauner@kernel.org/
> > > > >
> > > > > Cc: Xiubo Li <xiubli@redhat.com>
> > > > > Cc: Jeff Layton <jlayton@kernel.org>
> > > > > Cc: Ilya Dryomov <idryomov@gmail.com>
> > > > > Cc: ceph-devel@vger.kernel.org
> > > > > Co-Developed-by: Alexander Mikhalitsyn <aleksandr.mikhalitsyn@canonical.com>
> > > > > Signed-off-by: Christian Brauner <christian.brauner@ubuntu.com>
> > > > > Signed-off-by: Alexander Mikhalitsyn <aleksandr.mikhalitsyn@canonical.com>
> > > > > ---
> > > > > v7:
> > > > >       - reworked to use two new fields for owner UID/GID (https://github.com/ceph/ceph/pull/52575)
> > > > > ---
> > > > >   fs/ceph/mds_client.c         | 20 ++++++++++++++++++++
> > > > >   fs/ceph/mds_client.h         |  5 ++++-
> > > > >   include/linux/ceph/ceph_fs.h |  4 +++-
> > > > >   3 files changed, 27 insertions(+), 2 deletions(-)
> > > > >
> > > > > diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
> > > > > index c641ab046e98..ac095a95f3d0 100644
> > > > > --- a/fs/ceph/mds_client.c
> > > > > +++ b/fs/ceph/mds_client.c
> > > > > @@ -2923,6 +2923,7 @@ static struct ceph_msg *create_request_message(struct ceph_mds_session *session,
> > > > >   {
> > > > >       int mds = session->s_mds;
> > > > >       struct ceph_mds_client *mdsc = session->s_mdsc;
> > > > > +     struct ceph_client *cl = mdsc->fsc->client;
> > > > >       struct ceph_msg *msg;
> > > > >       struct ceph_mds_request_head_legacy *lhead;
> > > > >       const char *path1 = NULL;
> > > > > @@ -3028,6 +3029,16 @@ static struct ceph_msg *create_request_message(struct ceph_mds_session *session,
> > > > >       lhead = find_legacy_request_head(msg->front.iov_base,
> > > > >                                        session->s_con.peer_features);
> > > > >
> > > > > +     if ((req->r_mnt_idmap != &nop_mnt_idmap) &&
> > > > > +         !test_bit(CEPHFS_FEATURE_HAS_OWNER_UIDGID, &session->s_features)) {
> > > > > +             pr_err_ratelimited_client(cl,
> > > > > +                     "idmapped mount is used and CEPHFS_FEATURE_HAS_OWNER_UIDGID"
> > > > > +                     " is not supported by MDS. Fail request with -EIO.\n");
> > > > > +
> > > > > +             ret = -EIO;
> > > > > +             goto out_err;
> > > > > +     }
> > > > > +
> > > >
> > > > I think this couldn't fail the mounting operation, right ?
> > >
> > > This won't fail mounting. First of all an idmapped mount is always an
> > > additional mount, you always
> > > start from doing "normal" mount and only after that you can use this
> > > mount to create an idmapped one.
> > > ( example: https://github.com/brauner/mount-idmapped/tree/master )
> > >
> > > >
> > > > IMO we should fail the mounting from the beginning.
> > >
> > > Unfortunately, we can't fail mount from the beginning. Procedure of
> > > the idmapped mounts
> > > creation is handled not on the filesystem level, but on the VFS level
> >
> > Correct. It's a generic vfsmount feature.
> >
> > > (source: https://github.com/torvalds/linux/blob/0a8db05b571ad5b8d5c8774a004c0424260a90bd/fs/namespace.c#L4277
> > > )
> > >
> > > Kernel perform all required checks as:
> > > - filesystem type has declared to support idmappings
> > > (fs_type->fs_flags & FS_ALLOW_IDMAP)
> > > - user who creates idmapped mount should be CAP_SYS_ADMIN in a user
> > > namespace that owns superblock of the filesystem
> > > (for cephfs it's always init_user_ns => user should be root on the host)
> > >
> > > So I would like to go this way because of the reasons mentioned above:
> > > - root user is someone who understands what he does.
> > > - idmapped mounts are never "first" mounts. They are always created
> > > after "normal" mount.
> > > - effectively this check makes "normal" mount to work normally and
> > > fail only requests that comes through an idmapped mounts
> > > with reasonable error message. Obviously, all read operations will
> > > work perfectly well only the operations that create new inodes will
> > > fail.
> > > Btw, we already have an analogical semantic on the VFS level for users
> > > who have no UID/GID mapping to the host. Filesystem requests for
> > > such users will fail with -EOVERFLOW. Here we have something close.
> >
> > Refusing requests coming from an idmapped mount if the server misses
> > appropriate features is good enough as a first step imho. And yes, we do
> > have similar logic on the vfs level for unmapped uid/gid.
>
> Thanks, Christian!
>
> I wanted to add that alternative here is to modify caller_{u,g}id
> fields as it was done in the first approach,
> it will break the UID/GID-based permissions model for old MDS versions
> (we can put printk_once to inform user about this),
> but at the same time it will allow us to support idmapped mounts in
> all cases. This support will be not fully ideal for old MDS
>  and perfectly well for new MDS versions.
>
> Alternatively, we can introduce cephfs mount option like
> "idmap_with_old_mds" and if it's enabled then we set caller_{u,g}id
> for MDS without CEPHFS_FEATURE_HAS_OWNER_UIDGID, if it's disabled
> (default) we fail requests with -EIO. For
> new MDS everything goes in the right way.
>
> Kind regards,
> Alex

Hey there,

A very strong +1 on there needing to be some way to make this work
with older Ceph releases.
Ceph Reef isn't out yet and we're in July 2023, so I'd really rather not
have to wait until Ceph Squid in mid-2024 to be able to make use of
this!

Some kind of mount option, module option or the like would all be fine for this.

Stéphane
Aleksandr Mikhalitsyn July 27, 2023, 3 p.m. UTC | #5
On Thu, Jul 27, 2023 at 4:46 PM Stéphane Graber <stgraber@ubuntu.com> wrote:
>
> On Thu, Jul 27, 2023 at 5:48 AM Aleksandr Mikhalitsyn
> <aleksandr.mikhalitsyn@canonical.com> wrote:
> >
> > On Thu, Jul 27, 2023 at 11:01 AM Christian Brauner <brauner@kernel.org> wrote:
> > >
> > > On Thu, Jul 27, 2023 at 08:36:40AM +0200, Aleksandr Mikhalitsyn wrote:
> > > > On Thu, Jul 27, 2023 at 7:30 AM Xiubo Li <xiubli@redhat.com> wrote:
> > > > >
> > > > >
> > > > > On 7/26/23 22:10, Alexander Mikhalitsyn wrote:
> > > > > > Inode operations that create a new filesystem object such as ->mknod,
> > > > > > ->create, ->mkdir() and others don't take a {g,u}id argument explicitly.
> > > > > > Instead the caller's fs{g,u}id is used for the {g,u}id of the new
> > > > > > filesystem object.
> > > > > >
> > > > > > In order to ensure that the correct {g,u}id is used map the caller's
> > > > > > fs{g,u}id for creation requests. This doesn't require complex changes.
> > > > > > It suffices to pass in the relevant idmapping recorded in the request
> > > > > > message. If this request message was triggered from an inode operation
> > > > > > that creates filesystem objects it will have passed down the relevant
> > > > > > idmapping. If this is a request message that was triggered from an inode
> > > > > > operation that doesn't need to take idmappings into account, the initial
> > > > > > idmapping is passed down which is an identity mapping.
> > > > > >
> > > > > > This change uses a new cephfs protocol extension CEPHFS_FEATURE_HAS_OWNER_UIDGID
> > > > > > which adds two new fields (owner_{u,g}id) to the request head structure.
> > > > > > So, we need to ensure that MDS supports it otherwise we need to fail
> > > > > > any IO that comes through an idmapped mount because we can't process it
> > > > > > in a proper way. MDS server without such an extension will use caller_{u,g}id
> > > > > > fields to set a new inode owner UID/GID which is incorrect because caller_{u,g}id
> > > > > > values are unmapped. At the same time we can't map these fields with an
> > > > > > idmapping as it can break UID/GID-based permission checks logic on the
> > > > > > MDS side. This problem was described with a lot of details at [1], [2].
> > > > > >
> > > > > > [1] https://lore.kernel.org/lkml/CAEivzxfw1fHO2TFA4dx3u23ZKK6Q+EThfzuibrhA3RKM=ZOYLg@mail.gmail.com/
> > > > > > [2] https://lore.kernel.org/all/20220104140414.155198-3-brauner@kernel.org/
> > > > > >
> > > > > > Cc: Xiubo Li <xiubli@redhat.com>
> > > > > > Cc: Jeff Layton <jlayton@kernel.org>
> > > > > > Cc: Ilya Dryomov <idryomov@gmail.com>
> > > > > > Cc: ceph-devel@vger.kernel.org
> > > > > > Co-Developed-by: Alexander Mikhalitsyn <aleksandr.mikhalitsyn@canonical.com>
> > > > > > Signed-off-by: Christian Brauner <christian.brauner@ubuntu.com>
> > > > > > Signed-off-by: Alexander Mikhalitsyn <aleksandr.mikhalitsyn@canonical.com>
> > > > > > ---
> > > > > > v7:
> > > > > >       - reworked to use two new fields for owner UID/GID (https://github.com/ceph/ceph/pull/52575)
> > > > > > ---
> > > > > >   fs/ceph/mds_client.c         | 20 ++++++++++++++++++++
> > > > > >   fs/ceph/mds_client.h         |  5 ++++-
> > > > > >   include/linux/ceph/ceph_fs.h |  4 +++-
> > > > > >   3 files changed, 27 insertions(+), 2 deletions(-)
> > > > > >
> > > > > > diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
> > > > > > index c641ab046e98..ac095a95f3d0 100644
> > > > > > --- a/fs/ceph/mds_client.c
> > > > > > +++ b/fs/ceph/mds_client.c
> > > > > > @@ -2923,6 +2923,7 @@ static struct ceph_msg *create_request_message(struct ceph_mds_session *session,
> > > > > >   {
> > > > > >       int mds = session->s_mds;
> > > > > >       struct ceph_mds_client *mdsc = session->s_mdsc;
> > > > > > +     struct ceph_client *cl = mdsc->fsc->client;
> > > > > >       struct ceph_msg *msg;
> > > > > >       struct ceph_mds_request_head_legacy *lhead;
> > > > > >       const char *path1 = NULL;
> > > > > > @@ -3028,6 +3029,16 @@ static struct ceph_msg *create_request_message(struct ceph_mds_session *session,
> > > > > >       lhead = find_legacy_request_head(msg->front.iov_base,
> > > > > >                                        session->s_con.peer_features);
> > > > > >
> > > > > > +     if ((req->r_mnt_idmap != &nop_mnt_idmap) &&
> > > > > > +         !test_bit(CEPHFS_FEATURE_HAS_OWNER_UIDGID, &session->s_features)) {
> > > > > > +             pr_err_ratelimited_client(cl,
> > > > > > +                     "idmapped mount is used and CEPHFS_FEATURE_HAS_OWNER_UIDGID"
> > > > > > +                     " is not supported by MDS. Fail request with -EIO.\n");
> > > > > > +
> > > > > > +             ret = -EIO;
> > > > > > +             goto out_err;
> > > > > > +     }
> > > > > > +
> > > > >
> > > > > I think this couldn't fail the mounting operation, right ?
> > > >
> > > > This won't fail mounting. First of all an idmapped mount is always an
> > > > additional mount, you always
> > > > start from doing "normal" mount and only after that you can use this
> > > > mount to create an idmapped one.
> > > > ( example: https://github.com/brauner/mount-idmapped/tree/master )
> > > >
> > > > >
> > > > > IMO we should fail the mounting from the beginning.
> > > >
> > > > Unfortunately, we can't fail mount from the beginning. Procedure of
> > > > the idmapped mounts
> > > > creation is handled not on the filesystem level, but on the VFS level
> > >
> > > Correct. It's a generic vfsmount feature.
> > >
> > > > (source: https://github.com/torvalds/linux/blob/0a8db05b571ad5b8d5c8774a004c0424260a90bd/fs/namespace.c#L4277
> > > > )
> > > >
> > > > Kernel perform all required checks as:
> > > > - filesystem type has declared to support idmappings
> > > > (fs_type->fs_flags & FS_ALLOW_IDMAP)
> > > > - user who creates idmapped mount should be CAP_SYS_ADMIN in a user
> > > > namespace that owns superblock of the filesystem
> > > > (for cephfs it's always init_user_ns => user should be root on the host)
> > > >
> > > > So I would like to go this way because of the reasons mentioned above:
> > > > - root user is someone who understands what he does.
> > > > - idmapped mounts are never "first" mounts. They are always created
> > > > after "normal" mount.
> > > > - effectively this check makes "normal" mount to work normally and
> > > > fail only requests that comes through an idmapped mounts
> > > > with reasonable error message. Obviously, all read operations will
> > > > work perfectly well only the operations that create new inodes will
> > > > fail.
> > > > Btw, we already have an analogical semantic on the VFS level for users
> > > > who have no UID/GID mapping to the host. Filesystem requests for
> > > > such users will fail with -EOVERFLOW. Here we have something close.
> > >
> > > Refusing requests coming from an idmapped mount if the server misses
> > > appropriate features is good enough as a first step imho. And yes, we do
> > > have similar logic on the vfs level for unmapped uid/gid.
> >
> > Thanks, Christian!
> >
> > I wanted to add that alternative here is to modify caller_{u,g}id
> > fields as it was done in the first approach,
> > it will break the UID/GID-based permissions model for old MDS versions
> > (we can put printk_once to inform user about this),
> > but at the same time it will allow us to support idmapped mounts in
> > all cases. This support will be not fully ideal for old MDS
> >  and perfectly well for new MDS versions.
> >
> > Alternatively, we can introduce cephfs mount option like
> > "idmap_with_old_mds" and if it's enabled then we set caller_{u,g}id
> > for MDS without CEPHFS_FEATURE_HAS_OWNER_UIDGID, if it's disabled
> > (default) we fail requests with -EIO. For
> > new MDS everything goes in the right way.
> >
> > Kind regards,
> > Alex
>
> Hey there,
>
> A very strong +1 on there needing to be some way to make this work
> with older Ceph releases.
> Ceph Reef isn't out yet and we're in July 2023, so I'd really like not
> having to wait until Ceph Squid in mid 2024 to be able to make use of
> this!
>
> Some kind of mount option, module option or the like would all be fine for this.

I really like this way. I can implement it really quickly. Let's just
agree on this :)
It looks like an ideal solution for everyone.

Kind regards,
Alex

>
> Stéphane
Xiubo Li July 28, 2023, 10:12 a.m. UTC | #6
On 7/27/23 22:46, Stéphane Graber wrote:
> On Thu, Jul 27, 2023 at 5:48 AM Aleksandr Mikhalitsyn
> <aleksandr.mikhalitsyn@canonical.com> wrote:
>> On Thu, Jul 27, 2023 at 11:01 AM Christian Brauner <brauner@kernel.org> wrote:
>>> On Thu, Jul 27, 2023 at 08:36:40AM +0200, Aleksandr Mikhalitsyn wrote:
>>>> On Thu, Jul 27, 2023 at 7:30 AM Xiubo Li <xiubli@redhat.com> wrote:
>>>>>
>>>>> On 7/26/23 22:10, Alexander Mikhalitsyn wrote:
>>>>>> Inode operations that create a new filesystem object such as ->mknod,
>>>>>> ->create, ->mkdir() and others don't take a {g,u}id argument explicitly.
>>>>>> Instead the caller's fs{g,u}id is used for the {g,u}id of the new
>>>>>> filesystem object.
>>>>>>
>>>>>> In order to ensure that the correct {g,u}id is used map the caller's
>>>>>> fs{g,u}id for creation requests. This doesn't require complex changes.
>>>>>> It suffices to pass in the relevant idmapping recorded in the request
>>>>>> message. If this request message was triggered from an inode operation
>>>>>> that creates filesystem objects it will have passed down the relevant
>>>>>> idmapping. If this is a request message that was triggered from an inode
>>>>>> operation that doesn't need to take idmappings into account, the initial
>>>>>> idmapping is passed down which is an identity mapping.
>>>>>>
>>>>>> This change uses a new cephfs protocol extension CEPHFS_FEATURE_HAS_OWNER_UIDGID
>>>>>> which adds two new fields (owner_{u,g}id) to the request head structure.
>>>>>> So, we need to ensure that MDS supports it otherwise we need to fail
>>>>>> any IO that comes through an idmapped mount because we can't process it
>>>>>> in a proper way. MDS server without such an extension will use caller_{u,g}id
>>>>>> fields to set a new inode owner UID/GID which is incorrect because caller_{u,g}id
>>>>>> values are unmapped. At the same time we can't map these fields with an
>>>>>> idmapping as it can break UID/GID-based permission checks logic on the
>>>>>> MDS side. This problem was described with a lot of details at [1], [2].
>>>>>>
>>>>>> [1] https://lore.kernel.org/lkml/CAEivzxfw1fHO2TFA4dx3u23ZKK6Q+EThfzuibrhA3RKM=ZOYLg@mail.gmail.com/
>>>>>> [2] https://lore.kernel.org/all/20220104140414.155198-3-brauner@kernel.org/
>>>>>>
>>>>>> Cc: Xiubo Li <xiubli@redhat.com>
>>>>>> Cc: Jeff Layton <jlayton@kernel.org>
>>>>>> Cc: Ilya Dryomov <idryomov@gmail.com>
>>>>>> Cc: ceph-devel@vger.kernel.org
>>>>>> Co-Developed-by: Alexander Mikhalitsyn <aleksandr.mikhalitsyn@canonical.com>
>>>>>> Signed-off-by: Christian Brauner <christian.brauner@ubuntu.com>
>>>>>> Signed-off-by: Alexander Mikhalitsyn <aleksandr.mikhalitsyn@canonical.com>
>>>>>> ---
>>>>>> v7:
>>>>>>        - reworked to use two new fields for owner UID/GID (https://github.com/ceph/ceph/pull/52575)
>>>>>> ---
>>>>>>    fs/ceph/mds_client.c         | 20 ++++++++++++++++++++
>>>>>>    fs/ceph/mds_client.h         |  5 ++++-
>>>>>>    include/linux/ceph/ceph_fs.h |  4 +++-
>>>>>>    3 files changed, 27 insertions(+), 2 deletions(-)
>>>>>>
>>>>>> diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
>>>>>> index c641ab046e98..ac095a95f3d0 100644
>>>>>> --- a/fs/ceph/mds_client.c
>>>>>> +++ b/fs/ceph/mds_client.c
>>>>>> @@ -2923,6 +2923,7 @@ static struct ceph_msg *create_request_message(struct ceph_mds_session *session,
>>>>>>    {
>>>>>>        int mds = session->s_mds;
>>>>>>        struct ceph_mds_client *mdsc = session->s_mdsc;
>>>>>> +     struct ceph_client *cl = mdsc->fsc->client;
>>>>>>        struct ceph_msg *msg;
>>>>>>        struct ceph_mds_request_head_legacy *lhead;
>>>>>>        const char *path1 = NULL;
>>>>>> @@ -3028,6 +3029,16 @@ static struct ceph_msg *create_request_message(struct ceph_mds_session *session,
>>>>>>        lhead = find_legacy_request_head(msg->front.iov_base,
>>>>>>                                         session->s_con.peer_features);
>>>>>>
>>>>>> +     if ((req->r_mnt_idmap != &nop_mnt_idmap) &&
>>>>>> +         !test_bit(CEPHFS_FEATURE_HAS_OWNER_UIDGID, &session->s_features)) {
>>>>>> +             pr_err_ratelimited_client(cl,
>>>>>> +                     "idmapped mount is used and CEPHFS_FEATURE_HAS_OWNER_UIDGID"
>>>>>> +                     " is not supported by MDS. Fail request with -EIO.\n");
>>>>>> +
>>>>>> +             ret = -EIO;
>>>>>> +             goto out_err;
>>>>>> +     }
>>>>>> +
>>>>> I think this couldn't fail the mounting operation, right ?
>>>> This won't fail mounting. First of all an idmapped mount is always an
>>>> additional mount, you always
>>>> start from doing "normal" mount and only after that you can use this
>>>> mount to create an idmapped one.
>>>> ( example: https://github.com/brauner/mount-idmapped/tree/master )
>>>>
>>>>> IMO we should fail the mounting from the beginning.
>>>> Unfortunately, we can't fail mount from the beginning. Procedure of
>>>> the idmapped mounts
>>>> creation is handled not on the filesystem level, but on the VFS level
>>> Correct. It's a generic vfsmount feature.
>>>
>>>> (source: https://github.com/torvalds/linux/blob/0a8db05b571ad5b8d5c8774a004c0424260a90bd/fs/namespace.c#L4277
>>>> )
>>>>
>>>> Kernel perform all required checks as:
>>>> - filesystem type has declared to support idmappings
>>>> (fs_type->fs_flags & FS_ALLOW_IDMAP)
>>>> - user who creates idmapped mount should be CAP_SYS_ADMIN in a user
>>>> namespace that owns superblock of the filesystem
>>>> (for cephfs it's always init_user_ns => user should be root on the host)
>>>>
>>>> So I would like to go this way because of the reasons mentioned above:
>>>> - root user is someone who understands what he does.
>>>> - idmapped mounts are never "first" mounts. They are always created
>>>> after "normal" mount.
>>>> - effectively this check makes "normal" mount to work normally and
>>>> fail only requests that comes through an idmapped mounts
>>>> with reasonable error message. Obviously, all read operations will
>>>> work perfectly well only the operations that create new inodes will
>>>> fail.
>>>> Btw, we already have an analogical semantic on the VFS level for users
>>>> who have no UID/GID mapping to the host. Filesystem requests for
>>>> such users will fail with -EOVERFLOW. Here we have something close.
>>> Refusing requests coming from an idmapped mount if the server misses
>>> appropriate features is good enough as a first step imho. And yes, we do
>>> have similar logic on the vfs level for unmapped uid/gid.
>> Thanks, Christian!
>>
>> I wanted to add that alternative here is to modify caller_{u,g}id
>> fields as it was done in the first approach,
>> it will break the UID/GID-based permissions model for old MDS versions
>> (we can put printk_once to inform user about this),
>> but at the same time it will allow us to support idmapped mounts in
>> all cases. This support will be not fully ideal for old MDS
>>   and perfectly well for new MDS versions.
>>
>> Alternatively, we can introduce cephfs mount option like
>> "idmap_with_old_mds" and if it's enabled then we set caller_{u,g}id
>> for MDS without CEPHFS_FEATURE_HAS_OWNER_UIDGID, if it's disabled
>> (default) we fail requests with -EIO. For
>> new MDS everything goes in the right way.
>>
>> Kind regards,
>> Alex
> Hey there,
>
> A very strong +1 on there needing to be some way to make this work
> with older Ceph releases.
> Ceph Reef isn't out yet and we're in July 2023, so I'd really like not
> having to wait until Ceph Squid in mid 2024 to be able to make use of
> this!

IMO this shouldn't be an issue, because we can backport it to old releases.

Thanks

- Xiubo

>
> Some kind of mount option, module option or the like would all be fine for this.
>
> Stéphane
>