From patchwork Thu Nov 9 08:24:05 2023
X-Patchwork-Submitter: Xiubo Li
X-Patchwork-Id: 742863
From: xiubli@redhat.com
To: ceph-devel@vger.kernel.org
Cc: idryomov@gmail.com, jlayton@kernel.org, vshankar@redhat.com, mchangir@redhat.com, Xiubo Li
Subject: [PATCH 1/5] ceph: save the cap_auths in the client when a session is opened
Date: Thu, 9 Nov 2023 16:24:05 +0800
Message-ID: <20231109082409.417726-2-xiubli@redhat.com>
In-Reply-To: <20231109082409.417726-1-xiubli@redhat.com>
References: <20231109082409.417726-1-xiubli@redhat.com>

From: Xiubo Li

Save the cap_auths, which have been parsed by the MDS, in the opened
session.
URL: https://tracker.ceph.com/issues/61333
Signed-off-by: Xiubo Li
---
 fs/ceph/mds_client.c | 108 ++++++++++++++++++++++++++++++++++++++++++-
 fs/ceph/mds_client.h |  21 +++++++++
 2 files changed, 128 insertions(+), 1 deletion(-)

diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
index bb5252f5944a..3fb0b0104f6b 100644
--- a/fs/ceph/mds_client.c
+++ b/fs/ceph/mds_client.c
@@ -4113,10 +4113,13 @@ static void handle_session(struct ceph_mds_session *session,
 	void *p = msg->front.iov_base;
 	void *end = p + msg->front.iov_len;
 	struct ceph_mds_session_head *h;
-	u32 op;
+	struct ceph_mds_cap_auth *cap_auths = NULL;
+	u32 op, cap_auths_num = 0;
 	u64 seq, features = 0;
 	int wake = 0;
 	bool blocklisted = false;
+	u32 i;
 
 	/* decode */
 	ceph_decode_need(&p, end, sizeof(*h), bad);
@@ -4161,7 +4164,103 @@ static void handle_session(struct ceph_mds_session *session,
 		}
 	}
 
+	if (msg_version >= 6) {
+		ceph_decode_32_safe(&p, end, cap_auths_num, bad);
+		doutc(cl, "cap_auths_num %d\n", cap_auths_num);
+
+		if (cap_auths_num && op != CEPH_SESSION_OPEN) {
+			WARN_ON_ONCE(op != CEPH_SESSION_OPEN);
+			goto skip_cap_auths;
+		}
+
+		cap_auths = kcalloc(cap_auths_num,
+				    sizeof(struct ceph_mds_cap_auth),
+				    GFP_KERNEL);
+		if (!cap_auths) {
+			pr_err_client(cl, "No memory for cap_auths\n");
+			return;
+		}
+
+		for (i = 0; i < cap_auths_num; i++) {
+			u32 _len, j;
+
+			/* struct_v, struct_compat, and struct_len in MDSCapAuth */
+			ceph_decode_skip_n(&p, end, 2 + sizeof(u32), bad);
+
+			/* struct_v, struct_compat, and struct_len in MDSCapMatch */
+			ceph_decode_skip_n(&p, end, 2 + sizeof(u32), bad);
+			ceph_decode_64_safe(&p, end, cap_auths[i].match.uid, bad);
+			ceph_decode_32_safe(&p, end, _len, bad);
+			if (_len) {
+				cap_auths[i].match.gids = kcalloc(_len, sizeof(int64_t),
+								  GFP_KERNEL);
+				if (!cap_auths[i].match.gids) {
+					pr_err_client(cl, "No memory for gids\n");
+					goto fail;
+				}
+
+				cap_auths[i].match.num_gids = _len;
+				for (j = 0; j < _len; j++)
+					ceph_decode_64_safe(&p, end,
+							    cap_auths[i].match.gids[j],
+							    bad);
+			}
+
+			ceph_decode_32_safe(&p, end, _len, bad);
+			if (_len) {
+				cap_auths[i].match.path = kcalloc(_len + 1, sizeof(char),
+								  GFP_KERNEL);
+				if (!cap_auths[i].match.path) {
+					pr_err_client(cl, "No memory for path\n");
+					goto fail;
+				}
+				ceph_decode_copy(&p, cap_auths[i].match.path, _len);
+
+				/* Remove the trailing '/' */
+				while (_len && cap_auths[i].match.path[_len - 1] == '/') {
+					cap_auths[i].match.path[_len - 1] = '\0';
+					_len -= 1;
+				}
+			}
+
+			ceph_decode_32_safe(&p, end, _len, bad);
+			if (_len) {
+				cap_auths[i].match.fs_name = kcalloc(_len + 1, sizeof(char),
+								     GFP_KERNEL);
+				if (!cap_auths[i].match.fs_name) {
+					pr_err_client(cl, "No memory for fs_name\n");
+					goto fail;
+				}
+				ceph_decode_copy(&p, cap_auths[i].match.fs_name, _len);
+			}
+
+			ceph_decode_8_safe(&p, end, cap_auths[i].match.root_squash, bad);
+			ceph_decode_8_safe(&p, end, cap_auths[i].readable, bad);
+			ceph_decode_8_safe(&p, end, cap_auths[i].writeable, bad);
+			doutc(cl, "uid %lld, num_gids %d, path %s, fs_name %s,"
+			      " root_squash %d, readable %d, writeable %d\n",
+			      cap_auths[i].match.uid, cap_auths[i].match.num_gids,
+			      cap_auths[i].match.path, cap_auths[i].match.fs_name,
+			      cap_auths[i].match.root_squash,
+			      cap_auths[i].readable, cap_auths[i].writeable);
+		}
+	}
+
+skip_cap_auths:
 	mutex_lock(&mdsc->mutex);
+	if (op == CEPH_SESSION_OPEN) {
+		if (mdsc->s_cap_auths) {
+			for (i = 0; i < mdsc->s_cap_auths_num; i++) {
+				kfree(mdsc->s_cap_auths[i].match.gids);
+				kfree(mdsc->s_cap_auths[i].match.path);
+				kfree(mdsc->s_cap_auths[i].match.fs_name);
+			}
+			kfree(mdsc->s_cap_auths);
+		}
+		mdsc->s_cap_auths_num = cap_auths_num;
+		mdsc->s_cap_auths = cap_auths;
+	}
 	if (op == CEPH_SESSION_CLOSE) {
 		ceph_get_mds_session(session);
 		__unregister_session(mdsc, session);
@@ -4291,6 +4390,13 @@ static void handle_session(struct ceph_mds_session *session,
 	pr_err_client(cl, "corrupt message mds%d len %d\n", mds,
 		      (int)msg->front.iov_len);
 	ceph_msg_dump(msg);
+fail:
+	for (i = 0; i < cap_auths_num; i++) {
+		kfree(cap_auths[i].match.gids);
+		kfree(cap_auths[i].match.path);
+		kfree(cap_auths[i].match.fs_name);
+	}
+	kfree(cap_auths);
 	return;
 }

diff --git a/fs/ceph/mds_client.h b/fs/ceph/mds_client.h
index f25117fa910f..71f4d1ff663f 100644
--- a/fs/ceph/mds_client.h
+++ b/fs/ceph/mds_client.h
@@ -71,6 +71,24 @@ enum ceph_feature_type {
 struct ceph_fs_client;
 struct ceph_cap;
 
+#define MDS_AUTH_UID_ANY -1
+
+struct ceph_mds_cap_match {
+	int64_t uid;    // MDS_AUTH_UID_ANY as default
+	uint32_t num_gids;
+	int64_t *gids;  // Use these GIDs
+	char *path;     // Require path to be child of this (may be "" or "/" for any)
+	char *fs_name;
+	u8 root_squash; // False as default
+};
+
+struct ceph_mds_cap_auth {
+	struct ceph_mds_cap_match match;
+	u8 readable;
+	u8 writeable;
+};
+
 /*
  * parsed info about a single inode.  pointers are into the encoded
  * on-wire structures within the mds reply message payload.
@@ -514,6 +532,9 @@ struct ceph_mds_client {
 	struct rw_semaphore	pool_perm_rwsem;
 	struct rb_root		pool_perm_tree;
 
+	u32			s_cap_auths_num;
+	struct ceph_mds_cap_auth *s_cap_auths;
+
 	char nodename[__NEW_UTS_LEN + 1];
 };
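For reference, each decoded entry corresponds to one MDS cap granted to the
client's auth name. Below is a small userspace sketch (the cap string and all
values are assumed for illustration; the structs merely mirror the
ceph_mds_cap_match/ceph_mds_cap_auth definitions above) of what a cap like
"allow rw path=/dir uid=1000 gids=1000" would decode into:

#include <stdint.h>
#include <stdio.h>

/* Userspace mirrors of the kernel structs added above (sketch only). */
struct cap_match {
	int64_t uid;          /* -1 == MDS_AUTH_UID_ANY */
	uint32_t num_gids;
	int64_t *gids;
	char *path;           /* trailing '/' already stripped by the decoder */
	char *fs_name;
	uint8_t root_squash;
};

struct cap_auth {
	struct cap_match match;
	uint8_t readable;
	uint8_t writeable;
};

int main(void)
{
	/* Assumed example: an MDS cap of "allow rw path=/dir uid=1000 gids=1000" */
	int64_t gids[] = { 1000 };
	struct cap_auth a = {
		.match = {
			.uid = 1000,
			.num_gids = 1,
			.gids = gids,
			.path = "dir",
			.fs_name = "",
			.root_squash = 0,
		},
		.readable = 1,
		.writeable = 1,
	};

	printf("uid=%lld ngids=%u path=%s r/w=%u/%u\n",
	       (long long)a.match.uid, (unsigned)a.match.num_gids,
	       a.match.path, a.readable, a.writeable);
	return 0;
}

Note that the decoder stores match.path with any trailing '/' stripped, which
is what makes the prefix matching in the next patch well-defined.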
From patchwork Thu Nov 9 08:24:06 2023
X-Patchwork-Submitter: Xiubo Li
X-Patchwork-Id: 742609
From: xiubli@redhat.com
To: ceph-devel@vger.kernel.org
Cc: idryomov@gmail.com, jlayton@kernel.org, vshankar@redhat.com, mchangir@redhat.com, Xiubo Li
Subject: [PATCH 2/5] ceph: add ceph_mds_check_access() helper
Date: Thu, 9 Nov 2023 16:24:06 +0800
Message-ID: <20231109082409.417726-3-xiubli@redhat.com>
In-Reply-To: <20231109082409.417726-1-xiubli@redhat.com>
References: <20231109082409.417726-1-xiubli@redhat.com>

From: Xiubo Li

This helps check the MDS auth access on the client side. Always insert
the server path in front of the target path when matching the paths.

URL: https://tracker.ceph.com/issues/61333
Signed-off-by: Xiubo Li
---
 fs/ceph/mds_client.c | 157 +++++++++++++++++++++++++++++++++++++++++++
 fs/ceph/mds_client.h |   3 +
 2 files changed, 160 insertions(+)

diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
index 3fb0b0104f6b..158225259e00 100644
--- a/fs/ceph/mds_client.c
+++ b/fs/ceph/mds_client.c
@@ -5604,6 +5604,163 @@ void send_flush_mdlog(struct ceph_mds_session *s)
 	mutex_unlock(&s->s_mutex);
 }
 
+static int ceph_mds_auth_match(struct ceph_mds_client *mdsc,
+			       struct ceph_mds_cap_auth *auth,
+			       char *tpath)
+{
+	const struct cred *cred = get_current_cred();
+	uint32_t caller_uid = from_kuid(&init_user_ns, cred->fsuid);
+	uint32_t caller_gid = from_kgid(&init_user_ns, cred->fsgid);
+	struct ceph_client *cl = mdsc->fsc->client;
+	const char *path = mdsc->fsc->mount_options->server_path;
+	bool gid_matched = false;
+	uint32_t gid, tlen, len;
+	int i, j;
+
+	doutc(cl, "match.uid %lld\n", auth->match.uid);
+	if (auth->match.uid != MDS_AUTH_UID_ANY) {
+		if (auth->match.uid != caller_uid)
+			return 0;
+		if (auth->match.num_gids) {
+			for (i = 0; i < auth->match.num_gids; i++) {
+				if (caller_gid == auth->match.gids[i])
+					gid_matched = true;
+			}
+			if (!gid_matched && cred->group_info->ngroups) {
+				for (i = 0; i < cred->group_info->ngroups; i++) {
+					gid = from_kgid(&init_user_ns,
+							cred->group_info->gid[i]);
+					for (j = 0; j < auth->match.num_gids; j++) {
+						if (gid == auth->match.gids[j]) {
+							gid_matched = true;
+							break;
+						}
+					}
+					if (gid_matched)
+						break;
+				}
+			}
+			if (!gid_matched)
+				return 0;
+		}
+	}
+
+	/* path match */
+	if (auth->match.path) {
+		if (!tpath)
+			return 0;
+
+		tlen = strlen(tpath);
+		len = strlen(auth->match.path);
+		if (len) {
+			char *_tpath = tpath;
+			bool free_tpath = false;
+			int m, n;
+
+			doutc(cl, "server path %s, tpath %s, match.path %s\n",
+			      path, tpath, auth->match.path);
+			if (path && (m = strlen(path)) != 1) {
+				/* mount path + '/' + tpath + an extra space */
+				n = m + 1 + tlen + 1;
+				_tpath = kmalloc(n, GFP_NOFS);
+				if (!_tpath)
+					return -ENOMEM;
+				/* remove the leading '/' */
+				snprintf(_tpath, n, "%s/%s", path + 1, tpath);
+				free_tpath = true;
+				tlen = strlen(_tpath);
+			}
+
+			/* Remove the trailing '/' */
+			while (tlen && _tpath[tlen - 1] == '/') {
+				_tpath[tlen - 1] = '\0';
+				tlen -= 1;
+			}
+			doutc(cl, "_tpath %s\n", _tpath);
+
+			/* In case first == _tpath && tlen == len:
+			 *  path=/foo  --> /foo  target_path=/foo   --> match
+			 *  path=/foo/ --> /foo  target_path=/foo/  --> match
+			 *  path=/foo/ --> /foo  target_path=/foo   --> match
+			 *
+			 * In case first == _tpath && tlen > len:
+			 *  path=/foo  --> /foo  target_path=/foo/  --> match
+			 *  path=/foo/ --> /foo  target_path=/foo/d --> match
+			 *  path=/foo  --> /foo  target_path=/food  --> mismatch
+			 *
+			 * All the other cases --> mismatch
+			 */
+			char *first = strstr(_tpath, auth->match.path);
+			if (first != _tpath) {
+				if (free_tpath)
+					kfree(_tpath);
+				return 0;
+			}
+
+			if (tlen > len && _tpath[len] != '/') {
+				if (free_tpath)
+					kfree(_tpath);
+				return 0;
+			}
+		}
+	}
+
+	doutc(cl, "matched\n");
+	return 1;
+}
+
+int ceph_mds_check_access(struct ceph_mds_client *mdsc, char *tpath, int mask)
+{
+	const struct cred *cred = get_current_cred();
+	uint32_t caller_uid = from_kuid(&init_user_ns, cred->fsuid);
+	uint32_t caller_gid = from_kgid(&init_user_ns, cred->fsgid);
+	struct ceph_mds_cap_auth *rw_perms_s = NULL;
+	struct ceph_client *cl = mdsc->fsc->client;
+	bool root_squash_perms = true;
+	int i, err;
+
+	doutc(cl, "tpath '%s', mask %d, caller_uid %d, caller_gid %d\n",
+	      tpath, mask, caller_uid, caller_gid);
+
+	for (i = 0; i < mdsc->s_cap_auths_num; i++) {
+		struct ceph_mds_cap_auth *s = &mdsc->s_cap_auths[i];
+
+		err = ceph_mds_auth_match(mdsc, s, tpath);
+		if (err < 0) {
+			return err;
+		} else if (err > 0) {
+			// always follow the last auth caps' permission
+			root_squash_perms = true;
+			rw_perms_s = NULL;
+			if ((mask & MAY_WRITE) && s->writeable &&
+			    s->match.root_squash && (!caller_uid || !caller_gid))
+				root_squash_perms = false;
+
+			if (((mask & MAY_WRITE) && !s->writeable) ||
+			    ((mask & MAY_READ) && !s->readable))
+				rw_perms_s = s;
+		}
+	}
+
+	doutc(cl, "root_squash_perms %d, rw_perms_s %p\n", root_squash_perms,
+	      rw_perms_s);
+	if (root_squash_perms && rw_perms_s == NULL) {
+		doutc(cl, "access allowed\n");
+		return 0;
+	}
+
+	if (!root_squash_perms) {
+		doutc(cl, "root_squash is enabled and user(%d %d) isn't allowed to write\n",
+		      caller_uid, caller_gid);
+	}
+	if (rw_perms_s) {
+		doutc(cl, "mds auth caps readable/writeable %d/%d while request r/w %d/%d\n",
+		      rw_perms_s->readable, rw_perms_s->writeable,
+		      !!(mask & MAY_READ), !!(mask & MAY_WRITE));
+	}
+	doutc(cl, "access denied\n");
+	return -EACCES;
+}
+
 /*
  * called before mount is ro, and before dentries are torn down.
  * (hmm, does this still race with new lookups?)
 */
diff --git a/fs/ceph/mds_client.h b/fs/ceph/mds_client.h
index 71f4d1ff663f..8be57267c253 100644
--- a/fs/ceph/mds_client.h
+++ b/fs/ceph/mds_client.h
@@ -602,6 +602,9 @@ extern void ceph_reclaim_caps_nr(struct ceph_mds_client *mdsc, int nr);
 extern int ceph_iterate_session_caps(struct ceph_mds_session *session,
 				     int (*cb)(struct inode *, int mds, void *),
 				     void *arg);
+extern int ceph_mds_check_access(struct ceph_mds_client *mdsc, char *tpath,
+				 int mask);
+
 extern void ceph_mdsc_pre_umount(struct ceph_mds_client *mdsc);
 
 static inline void ceph_mdsc_free_path(char *path, int len)
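The comment block in ceph_mds_auth_match() is the core of the rule: the cap
path must be a whole-component prefix of the target path, after the mount's
server_path has been prepended and trailing slashes dropped. Here is a
self-contained restatement of just that check, using the comment's cases as
tests — a userspace sketch under those assumptions, not the kernel code
itself:

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Whole-component prefix match: cap_path has already had its trailing
 * '/' stripped (the decoder does that); the target is the
 * mount-path-prefixed path being accessed. */
static bool path_prefix_match(const char *cap_path, const char *target)
{
	size_t len = strlen(cap_path);
	size_t tlen = strlen(target);

	/* strip trailing '/' from the target, as the kernel code does */
	while (tlen && target[tlen - 1] == '/')
		tlen--;

	/* cap path must be a prefix starting at position 0 */
	if (tlen < len || strncmp(target, cap_path, len) != 0)
		return false;

	/* "foo" must match "foo" and "foo/d" but never "food" */
	return tlen == len || target[len] == '/';
}

int main(void)
{
	printf("%d\n", path_prefix_match("foo", "foo"));   /* 1: match    */
	printf("%d\n", path_prefix_match("foo", "foo/"));  /* 1: match    */
	printf("%d\n", path_prefix_match("foo", "foo/d")); /* 1: match    */
	printf("%d\n", path_prefix_match("foo", "food"));  /* 0: mismatch */
	return 0;
}

The target[len] == '/' test is what distinguishes "/foo/d" (a child of the
capped directory, match) from "/food" (an unrelated sibling, mismatch).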
From patchwork Thu Nov 9 08:24:07 2023
X-Patchwork-Submitter: Xiubo Li
X-Patchwork-Id: 742862
From: xiubli@redhat.com
To: ceph-devel@vger.kernel.org
Cc: idryomov@gmail.com, jlayton@kernel.org, vshankar@redhat.com, mchangir@redhat.com, Xiubo Li
Subject: [PATCH 3/5] ceph: check the cephx mds auth access for setattr
Date: Thu, 9 Nov 2023 16:24:07 +0800
Message-ID: <20231109082409.417726-4-xiubli@redhat.com>
In-Reply-To: <20231109082409.417726-1-xiubli@redhat.com>
References: <20231109082409.417726-1-xiubli@redhat.com>

From: Xiubo Li

If we hit any failure, just force a synchronous setattr.

URL: https://tracker.ceph.com/issues/61333
Signed-off-by: Xiubo Li
---
 fs/ceph/inode.c | 45 ++++++++++++++++++++++++++++++++++++---------
 1 file changed, 36 insertions(+), 9 deletions(-)

diff --git a/fs/ceph/inode.c b/fs/ceph/inode.c
index b01728aaed4b..f984babbb68c 100644
--- a/fs/ceph/inode.c
+++ b/fs/ceph/inode.c
@@ -2487,6 +2487,33 @@ int __ceph_setattr(struct mnt_idmap *idmap, struct inode *inode,
 	bool lock_snap_rwsem = false;
 	bool fill_fscrypt;
 	int truncate_retry = 20; /* The RMW will take around 50ms */
+	struct dentry *dentry;
+	char *path;
+	int pathlen;
+	u64 pathbase;
+	bool do_sync = false;
+
+	dentry = d_find_alias(inode);
+	if (!dentry) {
+		do_sync = true;
+	} else {
+		path = ceph_mdsc_build_path(mdsc, dentry, &pathlen, &pathbase, 0);
+		if (IS_ERR(path)) {
+			do_sync = true;
+			err = 0;
+		} else {
+			err = ceph_mds_check_access(mdsc, path, MAY_WRITE);
+		}
+		dput(dentry);
+
+		/* For non-EACCES errors, let the MDS do the auth check */
+		if (err == -EACCES) {
+			return err;
+		} else if (err < 0) {
+			do_sync = true;
+			err = 0;
+		}
+	}
 
 retry:
 	prealloc_cf = ceph_alloc_cap_flush();
@@ -2533,7 +2560,7 @@ int __ceph_setattr(struct mnt_idmap *idmap, struct inode *inode,
 
 		/* It should never be re-set once set */
 		WARN_ON_ONCE(ci->fscrypt_auth);
-		if (issued & CEPH_CAP_AUTH_EXCL) {
+		if (!do_sync && (issued & CEPH_CAP_AUTH_EXCL)) {
 			dirtied |= CEPH_CAP_AUTH_EXCL;
 			kfree(ci->fscrypt_auth);
 			ci->fscrypt_auth = (u8 *)cia->fscrypt_auth;
@@ -2562,7 +2589,7 @@ int __ceph_setattr(struct mnt_idmap *idmap, struct inode *inode,
 		      ceph_vinop(inode),
 		      from_kuid(&init_user_ns, inode->i_uid),
 		      from_kuid(&init_user_ns, attr->ia_uid));
-		if (issued & CEPH_CAP_AUTH_EXCL) {
+		if (!do_sync && (issued & CEPH_CAP_AUTH_EXCL)) {
 			inode->i_uid = fsuid;
 			dirtied |= CEPH_CAP_AUTH_EXCL;
 		} else if ((issued & CEPH_CAP_AUTH_SHARED) == 0 ||
@@ -2580,7 +2607,7 @@ int __ceph_setattr(struct mnt_idmap *idmap, struct inode *inode,
 		      ceph_vinop(inode),
 		      from_kgid(&init_user_ns, inode->i_gid),
 		      from_kgid(&init_user_ns, attr->ia_gid));
-		if (issued & CEPH_CAP_AUTH_EXCL) {
+		if (!do_sync && (issued & CEPH_CAP_AUTH_EXCL)) {
 			inode->i_gid = fsgid;
 			dirtied |= CEPH_CAP_AUTH_EXCL;
 		} else if ((issued & CEPH_CAP_AUTH_SHARED) == 0 ||
@@ -2594,7 +2621,7 @@ int __ceph_setattr(struct mnt_idmap *idmap, struct inode *inode,
 	if (ia_valid & ATTR_MODE) {
 		doutc(cl, "%p %llx.%llx mode 0%o -> 0%o\n", inode,
 		      ceph_vinop(inode), inode->i_mode, attr->ia_mode);
-		if (issued & CEPH_CAP_AUTH_EXCL) {
+		if (!do_sync && (issued & CEPH_CAP_AUTH_EXCL)) {
 			inode->i_mode = attr->ia_mode;
 			dirtied |= CEPH_CAP_AUTH_EXCL;
 		} else if ((issued & CEPH_CAP_AUTH_SHARED) == 0 ||
@@ -2611,11 +2638,11 @@ int __ceph_setattr(struct mnt_idmap *idmap, struct inode *inode,
 		      inode, ceph_vinop(inode),
 		      inode->i_atime.tv_sec, inode->i_atime.tv_nsec,
 		      attr->ia_atime.tv_sec, attr->ia_atime.tv_nsec);
-		if (issued & CEPH_CAP_FILE_EXCL) {
+		if (!do_sync && (issued & CEPH_CAP_FILE_EXCL)) {
 			ci->i_time_warp_seq++;
 			inode->i_atime = attr->ia_atime;
 			dirtied |= CEPH_CAP_FILE_EXCL;
-		} else if ((issued & CEPH_CAP_FILE_WR) &&
+		} else if (!do_sync && (issued & CEPH_CAP_FILE_WR) &&
 			   timespec64_compare(&inode->i_atime,
 					      &attr->ia_atime) < 0) {
 			inode->i_atime = attr->ia_atime;
@@ -2651,7 +2678,7 @@ int __ceph_setattr(struct mnt_idmap *idmap, struct inode *inode,
 					       CEPH_FSCRYPT_BLOCK_SIZE));
 			req->r_fscrypt_file = attr->ia_size;
 			fill_fscrypt = true;
-		} else if ((issued & CEPH_CAP_FILE_EXCL) && attr->ia_size >= isize) {
+		} else if (!do_sync && (issued & CEPH_CAP_FILE_EXCL) && attr->ia_size >= isize) {
 			if (attr->ia_size > isize) {
 				i_size_write(inode, attr->ia_size);
 				inode->i_blocks = calc_inode_blocks(attr->ia_size);
@@ -2686,11 +2713,11 @@ int __ceph_setattr(struct mnt_idmap *idmap, struct inode *inode,
 		      inode, ceph_vinop(inode),
 		      inode->i_mtime.tv_sec, inode->i_mtime.tv_nsec,
 		      attr->ia_mtime.tv_sec, attr->ia_mtime.tv_nsec);
-		if (issued & CEPH_CAP_FILE_EXCL) {
+		if (!do_sync && (issued & CEPH_CAP_FILE_EXCL)) {
 			ci->i_time_warp_seq++;
 			inode->i_mtime = attr->ia_mtime;
 			dirtied |= CEPH_CAP_FILE_EXCL;
-		} else if ((issued & CEPH_CAP_FILE_WR) &&
+		} else if (!do_sync && (issued & CEPH_CAP_FILE_WR) &&
 			   timespec64_compare(&inode->i_mtime,
 					      &attr->ia_mtime) < 0) {
 			inode->i_mtime = attr->ia_mtime;
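The policy encoded by all of the new !do_sync gates: a definite -EACCES from
ceph_mds_check_access() fails the setattr immediately, while any softer
failure (no dentry alias, a failed path build, -ENOMEM from the matcher) only
disables the cap-based local update and falls back to a synchronous MDS
request. A minimal userspace sketch of that gating (the cap bit value is an
assumption made for illustration):

#include <stdbool.h>
#include <stdio.h>

#define CEPH_CAP_AUTH_EXCL 0x1	/* assumed bit value, illustration only */

/* What "!do_sync && (issued & CEPH_CAP_AUTH_EXCL)" means: even when the
 * client holds the exclusive cap, a do_sync verdict forces the change
 * through a synchronous MDS request instead of dirtying it locally. */
static const char *apply_mode(bool do_sync, int issued)
{
	if (!do_sync && (issued & CEPH_CAP_AUTH_EXCL))
		return "dirty locally, flush caps later";
	return "synchronous MDS setattr";
}

int main(void)
{
	printf("%s\n", apply_mode(false, CEPH_CAP_AUTH_EXCL));
	printf("%s\n", apply_mode(true,  CEPH_CAP_AUTH_EXCL));
	return 0;
}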
From patchwork Thu Nov 9 08:24:08 2023
X-Patchwork-Submitter: Xiubo Li
X-Patchwork-Id: 742608
From: xiubli@redhat.com
To: ceph-devel@vger.kernel.org
Cc: idryomov@gmail.com, jlayton@kernel.org, vshankar@redhat.com, mchangir@redhat.com, Xiubo Li
Subject: [PATCH 4/5] ceph: check the cephx mds auth access for open
Date: Thu, 9 Nov 2023 16:24:08 +0800
Message-ID: <20231109082409.417726-5-xiubli@redhat.com>
In-Reply-To: <20231109082409.417726-1-xiubli@redhat.com>
References: <20231109082409.417726-1-xiubli@redhat.com>

From: Xiubo Li

Before opening the file locally, we need to check the cephx access.

URL: https://tracker.ceph.com/issues/61333
Signed-off-by: Xiubo Li
---
 fs/ceph/file.c | 34 ++++++++++++++++++++++++++++++++--
 1 file changed, 32 insertions(+), 2 deletions(-)

diff --git a/fs/ceph/file.c b/fs/ceph/file.c
index 227fc226227b..8e9178446fdd 100644
--- a/fs/ceph/file.c
+++ b/fs/ceph/file.c
@@ -365,6 +365,12 @@ int ceph_open(struct inode *inode, struct file *file)
 	struct ceph_file_info *fi = file->private_data;
 	int err;
 	int flags, fmode, wanted;
+	struct dentry *dentry;
+	char *path;
+	int pathlen;
+	u64 pathbase;
+	bool do_sync = false;
+	int mask = MAY_READ;
 
 	if (fi) {
 		doutc(cl, "file %p is already opened\n", file);
@@ -386,6 +392,30 @@ int ceph_open(struct inode *inode, struct file *file)
 	fmode = ceph_flags_to_mode(flags);
 	wanted = ceph_caps_for_mode(fmode);
 
+	if (fmode & CEPH_FILE_MODE_WR)
+		mask |= MAY_WRITE;
+	dentry = d_find_alias(inode);
+	if (!dentry) {
+		do_sync = true;
+	} else {
+		path = ceph_mdsc_build_path(mdsc, dentry, &pathlen, &pathbase, 0);
+		if (IS_ERR(path)) {
+			do_sync = true;
+			err = 0;
+		} else {
+			err = ceph_mds_check_access(mdsc, path, mask);
+		}
+		dput(dentry);
+
+		/* For non-EACCES errors, let the MDS do the auth check */
+		if (err == -EACCES) {
+			return err;
+		} else if (err < 0) {
+			do_sync = true;
+			err = 0;
+		}
+	}
+
 	/* snapped files are read-only */
 	if (ceph_snap(inode) != CEPH_NOSNAP && (file->f_mode & FMODE_WRITE))
 		return -EROFS;
@@ -401,7 +431,7 @@ int ceph_open(struct inode *inode, struct file *file)
 	 * asynchronously.
 	 */
 	spin_lock(&ci->i_ceph_lock);
-	if (__ceph_is_any_real_caps(ci) &&
+	if (!do_sync && __ceph_is_any_real_caps(ci) &&
 	    (((fmode & CEPH_FILE_MODE_WR) == 0) || ci->i_auth_cap)) {
 		int mds_wanted = __ceph_caps_mds_wanted(ci, true);
 		int issued = __ceph_caps_issued(ci, NULL);
@@ -419,7 +449,7 @@ int ceph_open(struct inode *inode, struct file *file)
 		ceph_check_caps(ci, 0);
 
 		return ceph_init_file(inode, file, fmode);
-	} else if (ceph_snap(inode) != CEPH_NOSNAP &&
+	} else if (!do_sync && ceph_snap(inode) != CEPH_NOSNAP &&
 		   (ci->i_snap_caps & wanted) == wanted) {
 		__ceph_touch_fmode(ci, mdsc, fmode);
 		spin_unlock(&ci->i_ceph_lock);
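ceph_open() derives the mask it hands to ceph_mds_check_access() from the
Ceph file mode: MAY_READ is always requested, and MAY_WRITE is added only for
write opens. A userspace sketch of that mapping (the constants mirror
linux/fs.h and ceph_fs.h values, stated here as assumptions):

#include <fcntl.h>
#include <stdio.h>

/* Assumed values, mirroring linux/fs.h and the Ceph file modes. */
#define MAY_WRITE		0x2
#define MAY_READ		0x4
#define CEPH_FILE_MODE_RD	1
#define CEPH_FILE_MODE_WR	2

/* Simplified stand-in for ceph_flags_to_mode(), plain reads/writes only. */
static int flags_to_mode(int flags)
{
	switch (flags & O_ACCMODE) {
	case O_RDONLY:
		return CEPH_FILE_MODE_RD;
	case O_WRONLY:
		return CEPH_FILE_MODE_WR;
	default:
		return CEPH_FILE_MODE_RD | CEPH_FILE_MODE_WR;
	}
}

int main(void)
{
	int fmode = flags_to_mode(O_RDWR);
	int mask = MAY_READ;		/* reads are always checked */

	if (fmode & CEPH_FILE_MODE_WR)
		mask |= MAY_WRITE;	/* write opens also need write access */
	printf("mask=%#x\n", mask);	/* 0x6 == MAY_READ|MAY_WRITE */
	return 0;
}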
From patchwork Thu Nov 9 08:24:09 2023
X-Patchwork-Submitter: Xiubo Li
X-Patchwork-Id: 742861
From: xiubli@redhat.com
To: ceph-devel@vger.kernel.org
Cc: idryomov@gmail.com, jlayton@kernel.org, vshankar@redhat.com, mchangir@redhat.com, Xiubo Li
Subject: [PATCH 5/5] ceph: check the cephx mds auth access for async dirop
Date: Thu, 9 Nov 2023 16:24:09 +0800
Message-ID: <20231109082409.417726-6-xiubli@redhat.com>
In-Reply-To: <20231109082409.417726-1-xiubli@redhat.com>
References: <20231109082409.417726-1-xiubli@redhat.com>

From: Xiubo Li

Before doing the op locally, we need to check the cephx access.

URL: https://tracker.ceph.com/issues/61333
Signed-off-by: Xiubo Li
---
 fs/ceph/dir.c  | 27 +++++++++++++++++++++++++++
 fs/ceph/file.c | 25 +++++++++++++++++++++++++
 2 files changed, 52 insertions(+)

diff --git a/fs/ceph/dir.c b/fs/ceph/dir.c
index 91709934c8b1..e50f16a566f7 100644
--- a/fs/ceph/dir.c
+++ b/fs/ceph/dir.c
@@ -1336,8 +1336,12 @@ static int ceph_unlink(struct inode *dir, struct dentry *dentry)
 	struct inode *inode = d_inode(dentry);
 	struct ceph_mds_request *req;
 	bool try_async = ceph_test_mount_opt(fsc, ASYNC_DIROPS);
+	struct dentry *dn;
 	int err = -EROFS;
 	int op;
+	char *path;
+	int pathlen;
+	u64 pathbase;
 
 	if (ceph_snap(dir) == CEPH_SNAPDIR) {
 		/* rmdir .snap/foo is RMSNAP */
@@ -1351,6 +1355,29 @@ static int ceph_unlink(struct inode *dir, struct dentry *dentry)
 			CEPH_MDS_OP_RMDIR : CEPH_MDS_OP_UNLINK;
 	} else
 		goto out;
+
+	dn = d_find_alias(dir);
+	if (!dn) {
+		try_async = false;
+	} else {
+		path = ceph_mdsc_build_path(mdsc, dn, &pathlen, &pathbase, 0);
+		if (IS_ERR(path)) {
+			try_async = false;
+			err = 0;
+		} else {
+			err = ceph_mds_check_access(mdsc, path, MAY_WRITE);
+		}
+		dput(dn);
+
+		/* For non-EACCES errors, let the MDS do the auth check */
+		if (err == -EACCES) {
+			return err;
+		} else if (err < 0) {
+			try_async = false;
+			err = 0;
+		}
+	}
+
 retry:
 	req = ceph_mdsc_create_request(mdsc, op, USE_AUTH_MDS);
 	if (IS_ERR(req)) {
diff --git a/fs/ceph/file.c b/fs/ceph/file.c
index 8e9178446fdd..c8bced90244b 100644
--- a/fs/ceph/file.c
+++ b/fs/ceph/file.c
@@ -788,6 +788,9 @@ int ceph_atomic_open(struct inode *dir, struct dentry *dentry,
 	bool try_async = ceph_test_mount_opt(fsc, ASYNC_DIROPS);
 	int mask;
 	int err;
+	char *path;
+	int pathlen;
+	u64 pathbase;
 
 	doutc(cl, "%p %llx.%llx dentry %p '%pd' %s flags %d mode 0%o\n",
 	      dir, ceph_vinop(dir), dentry, dentry,
@@ -805,6 +808,28 @@ int ceph_atomic_open(struct inode *dir, struct dentry *dentry,
 	 */
 	flags &= ~O_TRUNC;
 
+	dn = d_find_alias(dir);
+	if (!dn) {
+		try_async = false;
+	} else {
+		path = ceph_mdsc_build_path(mdsc, dn, &pathlen, &pathbase, 0);
+		if (IS_ERR(path)) {
+			try_async = false;
+			err = 0;
+		} else {
+			err = ceph_mds_check_access(mdsc, path, MAY_WRITE);
+		}
+		dput(dn);
+
+		/* For non-EACCES errors, let the MDS do the auth check */
+		if (err == -EACCES) {
+			return err;
+		} else if (err < 0) {
+			try_async = false;
+			err = 0;
+		}
+	}
+
 retry:
 	if (flags & O_CREAT) {
 		if (ceph_quota_is_max_files_exceeded(dir))
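The find-alias/build-path/check/fall-back block above now appears almost
verbatim in __ceph_setattr(), ceph_open(), ceph_unlink() and
ceph_atomic_open(). A sketch of how it could be factored into a single helper
— hypothetical name and userspace stubs, not part of this series:

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

#define MAY_WRITE 0x2

/* Userspace stubs standing in for ceph_mdsc_build_path() and
 * ceph_mds_check_access() (assumptions made for illustration). */
static const char *build_path(const char *dentry_name)
{
	return dentry_name;	/* may be NULL when no alias exists */
}

static int check_access(const char *path, int mask)
{
	(void)path; (void)mask;
	return 0;
}

/* One helper capturing the repeated block: returns -EACCES to fail the
 * operation outright, 0 otherwise; *fallback tells the caller to skip
 * the async/local fast path and go through the MDS synchronously. */
static int check_dirop_access(const char *dentry_name, int mask,
			      bool *fallback)
{
	const char *path = build_path(dentry_name);
	int err;

	if (!path) {
		*fallback = true;	/* no path: let the MDS decide */
		return 0;
	}
	err = check_access(path, mask);
	if (err == -EACCES)
		return err;		/* hard deny */
	if (err < 0)
		*fallback = true;	/* soft failure: go synchronous */
	return 0;
}

int main(void)
{
	bool fallback = false;
	int err = check_dirop_access("dir", MAY_WRITE, &fallback);

	printf("err=%d fallback=%d\n", err, fallback);
	return 0;
}

Centralizing the block would also keep the EACCES-versus-fallback policy in
one place if it ever needs to change.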