From patchwork Mon Sep 26 15:44:28 2022
X-Patchwork-Submitter: Roberto Sassu
X-Patchwork-Id: 609911
From: Roberto Sassu
To: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org,
    martin.lau@linux.dev, song@kernel.org, yhs@fb.com,
    john.fastabend@gmail.com, kpsingh@kernel.org, sdf@google.com,
    haoluo@google.com, jolsa@kernel.org, mykolal@fb.com,
    shuah@kernel.org, oss@lmb.io
Cc: bpf@vger.kernel.org, linux-kselftest@vger.kernel.org,
    linux-kernel@vger.kernel.org, fengc@google.com, davem@davemloft.net
Subject: [RFC][PATCH 1/3] libbpf: Define bpf_get_fd_opts and introduce
 bpf_map_get_fd_by_id_opts()
Date: Mon, 26 Sep 2022 17:44:28 +0200
Message-Id: <20220926154430.1552800-2-roberto.sassu@huaweicloud.com>
In-Reply-To: <20220926154430.1552800-1-roberto.sassu@huaweicloud.com>
References: <20220926154430.1552800-1-roberto.sassu@huaweicloud.com>
X-Mailing-List: linux-kselftest@vger.kernel.org

From: Roberto Sassu

Define a new data structure called bpf_get_fd_opts, with the member
open_flags, to be used by callers of the _opts variants of
bpf_*_get_fd_by_id() to specify the permissions needed for the file
descriptor to be obtained.

Also, introduce bpf_map_get_fd_by_id_opts(), to let the caller pass a
bpf_get_fd_opts structure.
Finally, keep the existing bpf_map_get_fd_by_id(), and implement it as a
call to bpf_map_get_fd_by_id_opts() with NULL as the opts argument, to
request read-write permissions (the current behavior).

Signed-off-by: Roberto Sassu
---
 tools/lib/bpf/bpf.c      | 12 +++++++++++-
 tools/lib/bpf/bpf.h      | 10 ++++++++++
 tools/lib/bpf/libbpf.map |  3 ++-
 3 files changed, 23 insertions(+), 2 deletions(-)

diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
index 1d49a0352836..4b03063edf1d 100644
--- a/tools/lib/bpf/bpf.c
+++ b/tools/lib/bpf/bpf.c
@@ -948,19 +948,29 @@ int bpf_prog_get_fd_by_id(__u32 id)
 	return libbpf_err_errno(fd);
 }
 
-int bpf_map_get_fd_by_id(__u32 id)
+int bpf_map_get_fd_by_id_opts(__u32 id,
+			      const struct bpf_get_fd_opts *opts)
 {
 	const size_t attr_sz = offsetofend(union bpf_attr, open_flags);
 	union bpf_attr attr;
 	int fd;
 
+	if (!OPTS_VALID(opts, bpf_get_fd_opts))
+		return libbpf_err(-EINVAL);
+
 	memset(&attr, 0, attr_sz);
 	attr.map_id = id;
+	attr.open_flags = OPTS_GET(opts, open_flags, 0);
 
 	fd = sys_bpf_fd(BPF_MAP_GET_FD_BY_ID, &attr, attr_sz);
 	return libbpf_err_errno(fd);
 }
 
+int bpf_map_get_fd_by_id(__u32 id)
+{
+	return bpf_map_get_fd_by_id_opts(id, NULL);
+}
+
 int bpf_btf_get_fd_by_id(__u32 id)
 {
 	const size_t attr_sz = offsetofend(union bpf_attr, open_flags);
diff --git a/tools/lib/bpf/bpf.h b/tools/lib/bpf/bpf.h
index 9c50beabdd14..38a1b7eccfc8 100644
--- a/tools/lib/bpf/bpf.h
+++ b/tools/lib/bpf/bpf.h
@@ -365,7 +365,17 @@ LIBBPF_API int bpf_prog_get_next_id(__u32 start_id, __u32 *next_id);
 LIBBPF_API int bpf_map_get_next_id(__u32 start_id, __u32 *next_id);
 LIBBPF_API int bpf_btf_get_next_id(__u32 start_id, __u32 *next_id);
 LIBBPF_API int bpf_link_get_next_id(__u32 start_id, __u32 *next_id);
+
+struct bpf_get_fd_opts {
+	size_t sz; /* size of this struct for forward/backward compatibility */
+	__u32 open_flags; /* permissions requested for the operation on fd */
+	__u32 :0;
+};
+#define bpf_get_fd_opts__last_field open_flags
+
 LIBBPF_API int bpf_prog_get_fd_by_id(__u32 id);
+LIBBPF_API int bpf_map_get_fd_by_id_opts(__u32 id,
+					 const struct bpf_get_fd_opts *opts);
 LIBBPF_API int bpf_map_get_fd_by_id(__u32 id);
 LIBBPF_API int bpf_btf_get_fd_by_id(__u32 id);
 LIBBPF_API int bpf_link_get_fd_by_id(__u32 id);
diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
index c1d6aa7c82b6..2e665b21d84f 100644
--- a/tools/lib/bpf/libbpf.map
+++ b/tools/lib/bpf/libbpf.map
@@ -367,10 +367,11 @@ LIBBPF_1.0.0 {
 		libbpf_bpf_map_type_str;
 		libbpf_bpf_prog_type_str;
 		perf_buffer__buffer;
-};
+} LIBBPF_0.8.0;
 
 LIBBPF_1.1.0 {
 	global:
+		bpf_map_get_fd_by_id_opts;
 		user_ring_buffer__discard;
 		user_ring_buffer__free;
 		user_ring_buffer__new;

From patchwork Mon Sep 26 15:44:29 2022
X-Patchwork-Submitter: Roberto Sassu
X-Patchwork-Id: 609460
From: Roberto Sassu
To: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org,
    martin.lau@linux.dev, song@kernel.org, yhs@fb.com,
    john.fastabend@gmail.com, kpsingh@kernel.org, sdf@google.com,
    haoluo@google.com, jolsa@kernel.org, mykolal@fb.com,
    shuah@kernel.org, oss@lmb.io
Cc: bpf@vger.kernel.org, linux-kselftest@vger.kernel.org,
    linux-kernel@vger.kernel.org, fengc@google.com, davem@davemloft.net
Subject: [RFC][PATCH 2/3] bpf: Enforce granted permissions in a map fd at
 verifier level
Date: Mon, 26 Sep 2022 17:44:29 +0200
Message-Id: <20220926154430.1552800-3-roberto.sassu@huaweicloud.com>
In-Reply-To: <20220926154430.1552800-1-roberto.sassu@huaweicloud.com>
References: <20220926154430.1552800-1-roberto.sassu@huaweicloud.com>
X-Mailing-List: linux-kselftest@vger.kernel.org

From: Roberto Sassu

Commit afdb09c720b62 ("security: bpf: Add LSM hooks for bpf object
related syscall") introduced new eBPF-related hooks in the LSM framework,
for programs and maps, aiming at enforcing permissions per eBPF object.
Commit 6e71b04a82248 ("bpf: Add file mode configuration into bpf maps")
further introduced the BPF_F_RDONLY and BPF_F_WRONLY flags, for user
space to request specific permissions when using a given eBPF object.

The two patches are related: the first ensures that LSMs grant to user
space the requested permissions (read and/or write) for performing
operations on an eBPF object; the second ensures that the granted
permissions are sufficient to perform a requested operation.

While the second check is done for operations of the bpf() system call
that directly deal with a map, such as BPF_MAP_*_ELEM, it is missing for
bpf() system call operations that still receive a map fd, but modify a
map indirectly: map iterators (addressed separately) and the eBPF
verifier.

An eBPF program might contain a map fd as an argument of the
BPF_PSEUDO_MAP_FD and BPF_PSEUDO_MAP_IDX instructions. The eBPF verifier
processes those instructions and replaces the map fd with the
corresponding map address, which can then be passed to eBPF helpers, such
as bpf_map_lookup_elem() and bpf_map_update_elem(). This has the same
effect as invoking the bpf() system call and executing the BPF_MAP_*_ELEM
operations.

The problem is that, unlike the BPF_MAP_*_ELEM operations of the bpf()
system call, the eBPF verifier does not check the fd modes before letting
the eBPF program perform map operations. As a consequence, for example, a
read-only fd can be provided to an eBPF program, allowing it to do a map
update.

A different behavior occurs when the map flags BPF_F_RDONLY_PROG and
BPF_F_WRONLY_PROG are set at map creation time. Commit 591fe9888d78
("bpf: add program side {rd, wr}only support for maps") ensures that only
the map operations compatible with the map flags can be executed by the
eBPF program; otherwise, the verifier refuses to run that program.

As the verifier can already restrict map operations, rely on the same
mechanism to enforce the permissions granted with the fd. Providing a
read-only fd has the same effect as setting the BPF_F_RDONLY_PROG map
flag, except that the effect is limited to the program execution and not
to the map lifetime.

If multiple fds for the same map are provided to the eBPF program,
combine their fd modes, as the verifier is not able to track the exact fd
a map address has been obtained from.

Finally, make sure that the resulting fd modes don't give the eBPF
program more permissions than the ones granted by the map flags. Instead,
starting from the permissions granted by the map flags, clear the ones
that are missing from the fd. Although map fd-based operations are
normally not affected by BPF_F_*_PROG, in this case they must be, as it
is the eBPF program itself doing the map operations, which is exactly
what BPF_F_*_PROG is designed to restrict.
Cc: stable@vger.kernel.org
Fixes: 6e71b04a82248 ("bpf: Add file mode configuration into bpf maps")
Reported-by: Lorenz Bauer
Signed-off-by: Roberto Sassu
---
 include/linux/bpf.h          | 13 +++++++++++++
 include/linux/bpf_verifier.h |  1 +
 kernel/bpf/verifier.c        | 26 ++++++++++++++++++++++++--
 3 files changed, 38 insertions(+), 2 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index edd43edb27d6..1e18f11df7ca 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1415,6 +1415,19 @@ static inline u32 bpf_map_flags_to_cap(struct bpf_map *map)
 		return BPF_MAP_CAN_READ | BPF_MAP_CAN_WRITE;
 }
 
+static inline u32 bpf_fd_modes_to_cap(fmode_t mode)
+{
+	u32 cap = 0;
+
+	if (mode & FMODE_CAN_READ)
+		cap |= BPF_MAP_CAN_READ;
+
+	if (mode & FMODE_CAN_WRITE)
+		cap |= BPF_MAP_CAN_WRITE;
+
+	return cap;
+}
+
 static inline bool bpf_map_flags_access_ok(u32 access_flags)
 {
 	return (access_flags & (BPF_F_RDONLY_PROG | BPF_F_WRONLY_PROG)) !=
diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index 9e1e6965f407..3f490bae0bcd 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -501,6 +501,7 @@ struct bpf_verifier_env {
 	struct bpf_verifier_state_list **explored_states; /* search pruning optimization */
 	struct bpf_verifier_state_list *free_list;
 	struct bpf_map *used_maps[MAX_USED_MAPS]; /* array of map's used by eBPF program */
+	u32 used_maps_caps[MAX_USED_MAPS]; /* array of map capabilities possessed by eBPF program */
 	struct btf_mod_pair used_btfs[MAX_USED_BTFS]; /* array of BTF's used by BPF program */
 	u32 used_map_cnt;		/* number of used maps */
 	u32 used_btf_cnt;		/* number of used BTF objects */
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 6f6d2d511c06..ac9bd4402169 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -3531,6 +3531,14 @@ static int check_map_access_type(struct bpf_verifier_env *env, u32 regno,
 	struct bpf_reg_state *regs = cur_regs(env);
 	struct bpf_map *map = regs[regno].map_ptr;
 	u32 cap = bpf_map_flags_to_cap(map);
+	int i;
+
+	for (i = 0; i < env->used_map_cnt; i++) {
+		if (env->used_maps[i] == map) {
+			cap &= env->used_maps_caps[i];
+			break;
+		}
+	}
 
 	if (type == BPF_WRITE && !(cap & BPF_MAP_CAN_WRITE)) {
 		verbose(env, "write into map forbidden, value_size=%d off=%d size=%d\n",
@@ -7040,6 +7048,8 @@ record_func_map(struct bpf_verifier_env *env, struct bpf_call_arg_meta *meta,
 {
 	struct bpf_insn_aux_data *aux = &env->insn_aux_data[insn_idx];
 	struct bpf_map *map = meta->map_ptr;
+	u32 cap;
+	int i;
 
 	if (func_id != BPF_FUNC_tail_call &&
 	    func_id != BPF_FUNC_map_lookup_elem &&
@@ -7058,11 +7068,20 @@ record_func_map(struct bpf_verifier_env *env, struct bpf_call_arg_meta *meta,
 		return -EINVAL;
 	}
 
+	cap = bpf_map_flags_to_cap(map);
+
+	for (i = 0; i < env->used_map_cnt; i++) {
+		if (env->used_maps[i] == map) {
+			cap &= env->used_maps_caps[i];
+			break;
+		}
+	}
+
 	/* In case of read-only, some additional restrictions
 	 * need to be applied in order to prevent altering the
 	 * state of the map from program side.
 	 */
-	if ((map->map_flags & BPF_F_RDONLY_PROG) &&
+	if (!(cap & BPF_MAP_CAN_WRITE) &&
 	    (func_id == BPF_FUNC_map_delete_elem ||
 	     func_id == BPF_FUNC_map_update_elem ||
 	     func_id == BPF_FUNC_map_push_elem ||
@@ -12870,6 +12889,7 @@ static int resolve_pseudo_ldimm64(struct bpf_verifier_env *env)
 		/* check whether we recorded this map already */
 		for (j = 0; j < env->used_map_cnt; j++) {
 			if (env->used_maps[j] == map) {
+				env->used_maps_caps[j] |= bpf_fd_modes_to_cap(f.file->f_mode);
 				aux->map_index = j;
 				fdput(f);
 				goto next_insn;
@@ -12889,7 +12909,9 @@ static int resolve_pseudo_ldimm64(struct bpf_verifier_env *env)
 		bpf_map_inc(map);
 
 		aux->map_index = env->used_map_cnt;
-		env->used_maps[env->used_map_cnt++] = map;
+		env->used_maps[env->used_map_cnt] = map;
+		env->used_maps_caps[env->used_map_cnt] = bpf_fd_modes_to_cap(f.file->f_mode);
+		env->used_map_cnt++;
 
 		if (bpf_map_is_cgroup_storage(map) &&
 		    bpf_cgroup_storage_assign(env->prog->aux, map)) {

From patchwork Mon Sep 26 15:44:30 2022
X-Patchwork-Submitter: Roberto Sassu
X-Patchwork-Id: 609910
From: Roberto Sassu
To: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org,
    martin.lau@linux.dev, song@kernel.org, yhs@fb.com,
    john.fastabend@gmail.com, kpsingh@kernel.org, sdf@google.com,
    haoluo@google.com, jolsa@kernel.org, mykolal@fb.com,
    shuah@kernel.org, oss@lmb.io
Cc: bpf@vger.kernel.org, linux-kselftest@vger.kernel.org,
    linux-kernel@vger.kernel.org, fengc@google.com, davem@davemloft.net
Subject: [RFC][PATCH 3/3] selftests/bpf: Test enforcement of map fd
 permissions at verifier level
Date: Mon, 26 Sep 2022 17:44:30 +0200
Message-Id: <20220926154430.1552800-4-roberto.sassu@huaweicloud.com>
In-Reply-To: <20220926154430.1552800-1-roberto.sassu@huaweicloud.com>
References: <20220926154430.1552800-1-roberto.sassu@huaweicloud.com>
X-Mailing-List: linux-kselftest@vger.kernel.org

From: Roberto Sassu

Create two maps, one read/writable and another only readable. Also,
define four programs, which respectively read, read/write, read/write
(with two map fds), and write to a given map. For the read/writable map,
two additional fds are obtained, to test the ability of the verifier to
restrict the operations of a program depending on the map permissions
granted.

To make testing easier, the map fd for the BPF_LD_MAP_FD instruction is
always the same (20), and dup2() is used to make sure that the program
takes the correct map at the time it is loaded. In addition, a second fd
(21), also set with dup2(), is passed to one eBPF program to check the
merging of fd modes (read-only and write-only).

The tests first verify the correct behavior, i.e. a program is
successfully executed if it has sufficient permissions on the map. Then,
they verify the incorrect combinations (e.g. a program willing to perform
read/write operations on a map referenced with a write-only fd), and
ensure that the verifier emits the expected error message.
Signed-off-by: Roberto Sassu
---
 .../selftests/bpf/prog_tests/map_fd_perm.c    | 227 ++++++++++++++++++
 1 file changed, 227 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/map_fd_perm.c

diff --git a/tools/testing/selftests/bpf/prog_tests/map_fd_perm.c b/tools/testing/selftests/bpf/prog_tests/map_fd_perm.c
new file mode 100644
index 000000000000..eaabf6f5bb9b
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/map_fd_perm.c
@@ -0,0 +1,227 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/*
+ * Copyright (C) 2022 Huawei Technologies Duesseldorf GmbH
+ *
+ * Author: Roberto Sassu
+ */
+
+#include <test_progs.h>
+
+#define TARGET_MAP_FD 20
+#define TARGET_MAP_FD2 21
+#define EXPECTED_MAP_VALUE 2
+
+char bpf_log_buf[BPF_LOG_BUF_SIZE];
+
+struct bpf_insn prog_r[] = {
+	BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_0, -4), /* *(u32 *)(fp - 4) = r0 */
+	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4), /* r2 = fp - 4 */
+	BPF_LD_MAP_FD(BPF_REG_1, TARGET_MAP_FD),
+	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
+	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0),
+	BPF_MOV64_IMM(BPF_REG_0, 0), /* r0 = 0 */
+	BPF_EXIT_INSN(),
+};
+
+struct bpf_insn prog_rw[] = {
+	BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_0, -4), /* *(u32 *)(fp - 4) = r0 */
+	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4), /* r2 = fp - 4 */
+	BPF_LD_MAP_FD(BPF_REG_1, TARGET_MAP_FD),
+	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
+	BPF_MOV64_IMM(BPF_REG_1, EXPECTED_MAP_VALUE),
+	BPF_ATOMIC_OP(BPF_W, BPF_ADD, BPF_REG_0, BPF_REG_1, 0),
+	BPF_MOV64_IMM(BPF_REG_0, 0), /* r0 = 0 */
+	BPF_EXIT_INSN(),
+};
+
+struct bpf_insn prog_rw_merge[] = {
+	BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_0, -4), /* *(u32 *)(fp - 4) = r0 */
+	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4), /* r2 = fp - 4 */
+	BPF_LD_MAP_FD(BPF_REG_1, TARGET_MAP_FD),
+	BPF_LD_MAP_FD(BPF_REG_1, TARGET_MAP_FD2),
+	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
+	BPF_MOV64_IMM(BPF_REG_1, EXPECTED_MAP_VALUE),
+	BPF_ATOMIC_OP(BPF_W, BPF_ADD, BPF_REG_0, BPF_REG_1, 0),
+	BPF_MOV64_IMM(BPF_REG_0, 0), /* r0 = 0 */
+	BPF_EXIT_INSN(),
+};
+
+struct bpf_insn prog_w[] = {
+	BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_0, -4), /* *(u32 *)(fp - 4) = r0 */
+	BPF_MOV64_IMM(BPF_REG_0, EXPECTED_MAP_VALUE),
+	BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_0, -8), /* *(u32 *)(fp - 8) = r0 */
+	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4), /* r2 = fp - 4 */
+	BPF_MOV64_REG(BPF_REG_3, BPF_REG_10),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_3, -8), /* r3 = fp - 8 */
+	BPF_LD_MAP_FD(BPF_REG_1, TARGET_MAP_FD),
+	BPF_MOV64_IMM(BPF_REG_4, 0),
+	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_update_elem),
+	BPF_MOV64_IMM(BPF_REG_0, 0), /* r0 = 0 */
+	BPF_EXIT_INSN(),
+};
+
+static int load_prog(struct bpf_insn *prog, int num_insn, int map_fd,
+		     int map_fd2, int map_check_fd, int expected_map_value,
+		     const char *expected_err_msg)
+{
+	u32 key = 0, value;
+	int ret, prog_fd, link_fd;
+
+	LIBBPF_OPTS(bpf_prog_load_opts, trace_opts,
+		    .expected_attach_type = BPF_TRACE_FENTRY,
+		    .log_buf = bpf_log_buf,
+		    .log_size = BPF_LOG_BUF_SIZE,
+	);
+
+	memset(bpf_log_buf, 0, sizeof(bpf_log_buf));
+
+	trace_opts.attach_btf_id =
+		libbpf_find_vmlinux_btf_id("array_map_lookup_elem",
+					   trace_opts.expected_attach_type);
+
+	ret = dup2(map_fd, TARGET_MAP_FD);
+	if (ret < 0)
+		return ret;
+
+	if (map_fd2 != -1) {
+		ret = dup2(map_fd2, TARGET_MAP_FD2);
+		if (ret < 0) {
+			close(TARGET_MAP_FD);
+			return ret;
+		}
+	}
+
+	prog_fd = bpf_prog_load(BPF_PROG_TYPE_TRACING, NULL, "GPL",
+				prog, num_insn, &trace_opts);
+
+	close(TARGET_MAP_FD);
+	if (map_fd2 != -1)
+		close(TARGET_MAP_FD2);
+
+	if (prog_fd < 0) {
+		if (expected_err_msg && strstr(bpf_log_buf, expected_err_msg))
+			return 0;
+
+		printf("%s\n", bpf_log_buf);
+		return -EINVAL;
+	}
+
+	if (map_check_fd >= 0) {
+		link_fd = bpf_link_create(prog_fd, 0, BPF_TRACE_FENTRY, NULL);
+		if (link_fd < 0) {
+			ret = -errno;
+			close(prog_fd);
+			return ret;
+		}
+
+		ret = bpf_map_lookup_elem(map_check_fd, &key, &value);
+
+		close(prog_fd);
+		close(link_fd);
+
+		if (ret < 0)
+			return ret;
+
+		if (value != expected_map_value)
+			return -EINVAL;
+	} else {
+		close(prog_fd);
+	}
+
+	return 0;
+}
+
+void test_map_fd_perm(void)
+{
+	int map_fd, map_fd_rdonly, map_fd_wronly;
+	int map_rdonly_fd;
+	struct bpf_map_info info_m = { 0 };
+	__u32 len = sizeof(info_m);
+	int ret;
+
+	DECLARE_LIBBPF_OPTS(bpf_get_fd_opts, fd_opts_rdonly,
+			    .open_flags = BPF_F_RDONLY,
+	);
+
+	DECLARE_LIBBPF_OPTS(bpf_get_fd_opts, fd_opts_wronly,
+			    .open_flags = BPF_F_WRONLY,
+	);
+
+	DECLARE_LIBBPF_OPTS(bpf_map_create_opts, create_opts,
+			    .map_flags = BPF_F_RDONLY_PROG,
+	);
+
+	map_fd = bpf_map_create(BPF_MAP_TYPE_ARRAY, NULL, sizeof(u32),
+				sizeof(u32), 1, NULL);
+	ASSERT_GE(map_fd, 0, "failed to create rw map");
+
+	map_rdonly_fd = bpf_map_create(BPF_MAP_TYPE_ARRAY, NULL, sizeof(u32),
+				       sizeof(u32), 1, &create_opts);
+	ASSERT_GE(map_rdonly_fd, 0, "failed to create ro map");
+
+	ret = bpf_obj_get_info_by_fd(map_fd, &info_m, &len);
+	ASSERT_OK(ret, "bpf_obj_get_info_by_fd");
+
+	map_fd_rdonly = bpf_map_get_fd_by_id_opts(info_m.id, &fd_opts_rdonly);
+	ASSERT_GE(map_fd_rdonly, 0, "bpf_map_get_fd_by_id_opts rw map ro fd");
+
+	map_fd_wronly = bpf_map_get_fd_by_id_opts(info_m.id, &fd_opts_wronly);
+	ASSERT_GE(map_fd_wronly, 0, "bpf_map_get_fd_by_id_opts rw map wo fd");
+
+	ret = load_prog(prog_r, ARRAY_SIZE(prog_r), map_fd_rdonly, -1, -1, -1,
+			NULL);
+	ASSERT_OK(ret, "load ro prog, rw map, ro fd");
+
+	ret = load_prog(prog_rw, ARRAY_SIZE(prog_rw), map_fd, -1, map_fd,
+			EXPECTED_MAP_VALUE, NULL);
+	ASSERT_OK(ret, "load rw prog, rw map, rw fd");
+
+	ret = load_prog(prog_w, ARRAY_SIZE(prog_w), map_fd_wronly, -1, map_fd,
+			EXPECTED_MAP_VALUE, NULL);
+	ASSERT_OK(ret, "load wo prog, rw map, wo fd");
+
+	ret = load_prog(prog_r, ARRAY_SIZE(prog_r), map_rdonly_fd, -1, -1, -1,
+			NULL);
+	ASSERT_OK(ret, "load ro prog, ro map, ro fd");
+
+	/* Existing value was set by prog_w, so it is EXPECTED_MAP_VALUE * 2. */
+	ret = load_prog(prog_rw_merge, ARRAY_SIZE(prog_rw_merge), map_fd_rdonly,
+			map_fd_wronly, map_fd, EXPECTED_MAP_VALUE * 2, NULL);
+	ASSERT_OK(ret, "load rw prog merge, ro fd, wo fd");
+
+	ret = load_prog(prog_r, ARRAY_SIZE(prog_r), map_fd_wronly, -1, -1, -1,
+			"read from map forbidden");
+	ASSERT_OK(ret, "load ro prog, rw map, wo fd");
+
+	ret = load_prog(prog_w, ARRAY_SIZE(prog_w), map_fd_rdonly, -1, -1, -1,
+			"write into map forbidden");
+	ASSERT_OK(ret, "load wo prog, rw map, ro fd");
+
+	ret = load_prog(prog_rw, ARRAY_SIZE(prog_rw), map_fd_rdonly, -1, -1, -1,
+			"write into map forbidden");
+	ASSERT_OK(ret, "load rw prog, rw map, ro fd");
+
+	ret = load_prog(prog_rw, ARRAY_SIZE(prog_rw), map_fd_wronly, -1, -1, -1,
+			"read from map forbidden");
+	ASSERT_OK(ret, "load rw prog, rw map, wo fd");
+
+	ret = load_prog(prog_w, ARRAY_SIZE(prog_w), map_rdonly_fd, -1, -1, -1,
+			"write into map forbidden");
+	ASSERT_OK(ret, "load wo prog, ro map, rw fd");
+}