
[bpf-next,v3,0/3] Allow mmap of /sys/kernel/btf/vmlinux

Message ID 20250505-vmlinux-mmap-v3-0-5d53afa060e8@isovalent.com

Lorenz Bauer May 5, 2025, 6:38 p.m. UTC
I'd like to cut down the memory usage of parsing vmlinux BTF in ebpf-go.
With some upcoming changes, the library sits at about 5 MiB per parse,
and most of that memory goes to copying the BTF blob into user space.
Allowing vmlinux BTF to be mmapped read-only into user space cuts
memory usage by about 75%.

Signed-off-by: Lorenz Bauer <lmb@isovalent.com>
---
Changes in v3:
- Remove slightly confusing calculation of trailing (Alexei)
- Use vm_insert_page (Alexei)
- Simplified libbpf code
- Link to v2: https://lore.kernel.org/r/20250502-vmlinux-mmap-v2-0-95c271434519@isovalent.com

Changes in v2:
- Use btf__new in selftest
- Avoid vm_iomap_memory in btf_vmlinux_mmap
- Add VM_DONTDUMP
- Add support to libbpf
- Link to v1: https://lore.kernel.org/r/20250501-vmlinux-mmap-v1-0-aa2724572598@isovalent.com

---
Lorenz Bauer (3):
      btf: allow mmap of vmlinux btf
      selftests: bpf: add a test for mmapable vmlinux BTF
      libbpf: Use mmap to parse vmlinux BTF from sysfs

 include/asm-generic/vmlinux.lds.h                  |  3 +-
 kernel/bpf/sysfs_btf.c                             | 37 ++++++++++
 tools/lib/bpf/btf.c                                | 83 +++++++++++++++++++---
 tools/testing/selftests/bpf/prog_tests/btf_sysfs.c | 83 ++++++++++++++++++++++
 4 files changed, 194 insertions(+), 12 deletions(-)
---
base-commit: 38d976c32d85ef12dcd2b8a231196f7049548477
change-id: 20250501-vmlinux-mmap-2ec5563c3ef1

Best regards,

Comments

Andrii Nakryiko May 6, 2025, 9:39 p.m. UTC | #1
On Mon, May 5, 2025 at 11:39 AM Lorenz Bauer <lmb@isovalent.com> wrote:
>
> User space needs access to kernel BTF for many modern features of BPF.
> Right now each process needs to read the BTF blob either in pieces or
> as a whole. Allow mmaping the sysfs file so that processes can directly
> access the memory allocated for it in the kernel.
>
> Signed-off-by: Lorenz Bauer <lmb@isovalent.com>
> ---
>  include/asm-generic/vmlinux.lds.h |  3 ++-
>  kernel/bpf/sysfs_btf.c            | 37 +++++++++++++++++++++++++++++++++++++
>  2 files changed, 39 insertions(+), 1 deletion(-)
>
> diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
> index 58a635a6d5bdf0c53c267c2a3d21a5ed8678ce73..1750390735fac7637cc4d2fa05f96cb2a36aa448 100644
> --- a/include/asm-generic/vmlinux.lds.h
> +++ b/include/asm-generic/vmlinux.lds.h
> @@ -667,10 +667,11 @@ defined(CONFIG_AUTOFDO_CLANG) || defined(CONFIG_PROPELLER_CLANG)
>   */
>  #ifdef CONFIG_DEBUG_INFO_BTF
>  #define BTF                                                            \
> +       . = ALIGN(PAGE_SIZE);                                           \
>         .BTF : AT(ADDR(.BTF) - LOAD_OFFSET) {                           \
>                 BOUNDED_SECTION_BY(.BTF, _BTF)                          \
>         }                                                               \
> -       . = ALIGN(4);                                                   \
> +       . = ALIGN(PAGE_SIZE);                                           \
>         .BTF_ids : AT(ADDR(.BTF_ids) - LOAD_OFFSET) {                   \
>                 *(.BTF_ids)                                             \
>         }
> diff --git a/kernel/bpf/sysfs_btf.c b/kernel/bpf/sysfs_btf.c
> index 81d6cf90584a7157929c50f62a5c6862e7a3d081..37278d7f38ae72f2d7efcfa859e86aaf12e39a25 100644
> --- a/kernel/bpf/sysfs_btf.c
> +++ b/kernel/bpf/sysfs_btf.c
> @@ -7,14 +7,51 @@
>  #include <linux/kobject.h>
>  #include <linux/init.h>
>  #include <linux/sysfs.h>
> +#include <linux/mm.h>
> +#include <linux/io.h>
> +#include <linux/btf.h>
>
>  /* See scripts/link-vmlinux.sh, gen_btf() func for details */
>  extern char __start_BTF[];
>  extern char __stop_BTF[];
>
> +static int btf_sysfs_vmlinux_mmap(struct file *filp, struct kobject *kobj,
> +                                 const struct bin_attribute *attr,
> +                                 struct vm_area_struct *vma)
> +{
> +       unsigned long pages = PAGE_ALIGN(attr->size) >> PAGE_SHIFT;
> +       size_t vm_size = vma->vm_end - vma->vm_start;
> +       unsigned long addr = (unsigned long)attr->private;
> +       int i, err = 0;
> +
> +       if (addr != (unsigned long)__start_BTF || !PAGE_ALIGNED(addr))
> +               return -EINVAL;
> +
> +       if (vma->vm_pgoff)
> +               return -EINVAL;

any particular reason to not allow vm_pgoff?

> +
> +       if (vma->vm_flags & (VM_WRITE | VM_EXEC | VM_MAYSHARE))
> +               return -EACCES;
> +
> +       if (vm_size >> PAGE_SHIFT > pages)

() around shift operation, please, for those of us who haven't
memorized the entire C operator precedence table ;)

> +               return -EINVAL;
> +
> +       vm_flags_mod(vma, VM_DONTDUMP, VM_MAYEXEC | VM_MAYWRITE);
> +
> +       for (i = 0; i < pages && !err; i++, addr += PAGE_SIZE)
> +               err = vm_insert_page(vma, vma->vm_start + i * PAGE_SIZE,
> +                                    virt_to_page(addr));
> +
> +       if (err)
> +               zap_vma_pages(vma);

it's certainly subjective, but I find this error handling with !err in
the for loop condition hard to follow. What's wrong with the arguably
more straightforward version below? (And as you can see, I'm not a big
fan of mixing a mutated addr with a calculated vma->vm_start + i *
PAGE_SIZE: pick one style and follow it for both entities.)


for (i = 0; i < pages; i++) {
    err = vm_insert_page(vma, vma->vm_start + i * PAGE_SIZE,
                         virt_to_page(addr + i * PAGE_SIZE));
    if (err) {
        zap_vma_pages(vma);
        return err;
    }
}

return 0;

?


> +
> +       return err;
> +}
> +
>  static struct bin_attribute bin_attr_btf_vmlinux __ro_after_init = {
>         .attr = { .name = "vmlinux", .mode = 0444, },
>         .read_new = sysfs_bin_attr_simple_read,
> +       .mmap = btf_sysfs_vmlinux_mmap,
>  };
>
>  struct kobject *btf_kobj;
>
> --
> 2.49.0
>
Andrii Nakryiko May 6, 2025, 9:39 p.m. UTC | #2
On Mon, May 5, 2025 at 11:39 AM Lorenz Bauer <lmb@isovalent.com> wrote:
>
> Add a basic test for the ability to mmap /sys/kernel/btf/vmlinux. Since
> libbpf doesn't have an API to parse BTF from memory we do some basic
> sanity checks ourselves.
>
> Signed-off-by: Lorenz Bauer <lmb@isovalent.com>
> ---
>  tools/testing/selftests/bpf/prog_tests/btf_sysfs.c | 83 ++++++++++++++++++++++
>  1 file changed, 83 insertions(+)
>
> diff --git a/tools/testing/selftests/bpf/prog_tests/btf_sysfs.c b/tools/testing/selftests/bpf/prog_tests/btf_sysfs.c
> new file mode 100644
> index 0000000000000000000000000000000000000000..3319cf758897d46cefa8ca25e16acb162f4e9889
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/prog_tests/btf_sysfs.c
> @@ -0,0 +1,83 @@
> +// SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
> +/* Copyright (c) 2025 Isovalent */
> +
> +#include <test_progs.h>
> +#include <bpf/btf.h>
> +#include <sys/stat.h>
> +#include <sys/mman.h>
> +#include <fcntl.h>
> +#include <unistd.h>
> +
> +static void test_btf_mmap_sysfs(const char *path, struct btf *base)
> +{
> +       struct stat st;
> +       __u64 btf_size, end;
> +       void *raw_data = NULL;
> +       int fd = -1;
> +       long page_size;
> +       struct btf *btf = NULL;
> +
> +       page_size = sysconf(_SC_PAGESIZE);
> +       if (!ASSERT_GE(page_size, 0, "get_page_size"))
> +               goto cleanup;
> +
> +       if (!ASSERT_OK(stat(path, &st), "stat_btf"))
> +               goto cleanup;
> +
> +       btf_size = st.st_size;
> +       end = (btf_size + page_size - 1) / page_size * page_size;
> +
> +       fd = open(path, O_RDONLY);
> +       if (!ASSERT_GE(fd, 0, "open_btf"))
> +               goto cleanup;
> +
> +       raw_data = mmap(NULL, btf_size, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
> +       if (!ASSERT_EQ(raw_data, MAP_FAILED, "mmap_btf_writable"))
> +               goto cleanup;
> +
> +       raw_data = mmap(NULL, btf_size, PROT_READ, MAP_SHARED, fd, 0);
> +       if (!ASSERT_EQ(raw_data, MAP_FAILED, "mmap_btf_shared"))
> +               goto cleanup;
> +
> +       raw_data = mmap(NULL, end + 1, PROT_READ, MAP_PRIVATE, fd, 0);
> +       if (!ASSERT_EQ(raw_data, MAP_FAILED, "mmap_btf_invalid_size"))
> +               goto cleanup;
> +
> +       raw_data = mmap(NULL, end, PROT_READ, MAP_PRIVATE, fd, 0);
> +       if (!ASSERT_NEQ(raw_data, MAP_FAILED, "mmap_btf"))

ASSERT_OK_PTR()?

> +               goto cleanup;
> +
> +       if (!ASSERT_EQ(mprotect(raw_data, btf_size, PROT_READ | PROT_WRITE), -1,
> +           "mprotect_writable"))
> +               goto cleanup;
> +
> +       if (!ASSERT_EQ(mprotect(raw_data, btf_size, PROT_READ | PROT_EXEC), -1,
> +           "mprotect_executable"))
> +               goto cleanup;
> +
> +       /* Check padding is zeroed */
> +       for (int i = btf_size; i < end; i++) {
> +               if (((__u8 *)raw_data)[i] != 0) {
> +                       PRINT_FAIL("tail of BTF is not zero at page offset %d\n", i);
> +                       goto cleanup;
> +               }
> +       }
> +
> +       btf = btf__new_split(raw_data, btf_size, base);
> +       if (!ASSERT_NEQ(btf, NULL, "parse_btf"))

ASSERT_OK_PTR()

> +               goto cleanup;
> +
> +cleanup:
> +       if (raw_data && raw_data != MAP_FAILED)
> +               munmap(raw_data, btf_size);
> +       if (btf)

no need to check this, all libbpf destructor APIs deal with NULL
correctly (ignoring them)

> +               btf__free(btf);
> +       if (fd >= 0)
> +               close(fd);
> +}
> +
> +void test_btf_sysfs(void)
> +{
> +       if (test__start_subtest("vmlinux"))
> +               test_btf_mmap_sysfs("/sys/kernel/btf/vmlinux", NULL);

Do you intend to add more subtests? If not, why even use a subtest structure?

> +}
>
> --
> 2.49.0
>
Lorenz Bauer May 7, 2025, 9:06 a.m. UTC | #3
On Tue, May 6, 2025 at 10:39 PM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
> > +       if (vma->vm_pgoff)
> > +               return -EINVAL;
>
> any particular reason to not allow vm_pgoff?

Doesn't seem particularly useful because the header is at offset 0,
and I don't trust myself to get the overflow checks done right.
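The pitfall with a nonzero vm_pgoff is that a naive `pgoff + requested_pages <= total_pages` bound can wrap around. A plain-C sketch of the overflow-safe form of the check (a hypothetical helper for illustration, not code from the patch):

```c
#include <stdbool.h>

/*
 * Would an mmap request of req_pages pages at page offset pgoff fit
 * inside an object of total_pages pages? Phrased as a subtraction on
 * the right-hand side so that a huge pgoff cannot overflow the sum
 * pgoff + req_pages.
 */
static bool mmap_range_ok(unsigned long pgoff, unsigned long req_pages,
			  unsigned long total_pages)
{
	if (pgoff > total_pages)
		return false;
	return req_pages <= total_pages - pgoff;
}
```

With vm_pgoff required to be zero, the single `vm_size >> PAGE_SHIFT > pages` comparison in the patch is sufficient and none of this is needed.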

> it's certainly subjective, but I find this error handling with !err in
> the for loop condition hard to follow. What's wrong with the arguably
> more straightforward version below? (And as you can see, I'm not a big
> fan of mixing a mutated addr with a calculated vma->vm_start + i *
> PAGE_SIZE: pick one style and follow it for both entities.)

Yeah that's nicer, I was just going off of what Alexei proposed.
Lorenz Bauer May 7, 2025, 9:14 a.m. UTC | #4
On Tue, May 6, 2025 at 10:39 PM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:

> > +       raw_data = mmap(NULL, end, PROT_READ, MAP_PRIVATE, fd, 0);
> > +       if (!ASSERT_NEQ(raw_data, MAP_FAILED, "mmap_btf"))
>
> ASSERT_OK_PTR()?

Don't think that mmap follows libbpf_get_error conventions? I'd keep
it as it is.

> > +       btf = btf__new_split(raw_data, btf_size, base);
> > +       if (!ASSERT_NEQ(btf, NULL, "parse_btf"))
>
> ASSERT_OK_PTR()

Ack.

> Do you intend to add more subtests? If not, why even use a subtest structure?

The original intention was to add kmod support, but that didn't pan
out, see my discussion with Alexei. I can drop the subtest if you
want, but I'd probably keep the helper as it is.
Andrii Nakryiko May 7, 2025, 6:18 p.m. UTC | #5
On Wed, May 7, 2025 at 2:14 AM Lorenz Bauer <lmb@isovalent.com> wrote:
>
> On Tue, May 6, 2025 at 10:39 PM Andrii Nakryiko
> <andrii.nakryiko@gmail.com> wrote:
>
> > > +       raw_data = mmap(NULL, end, PROT_READ, MAP_PRIVATE, fd, 0);
> > > +       if (!ASSERT_NEQ(raw_data, MAP_FAILED, "mmap_btf"))
> >
> > ASSERT_OK_PTR()?
>
> Don't think that mmap follows libbpf_get_error conventions? I'd keep
> it as it is.

ASSERT_OK_PTR() isn't libbpf-specific (and libbpf actually returns
either NULL or a valid pointer from all public APIs, since libbpf
1.0). But if you look at the implementation, an "OK" pointer is a
non-NULL pointer that is also not a small negative value: NULL is a
bad pointer, -1 (MAP_FAILED) is a bad pointer, and so on. So it's a
pretty universal check for anything pointer-related. Please do use
ASSERT_OK_PTR(); it's semantically better in tests.

>
> > > +       btf = btf__new_split(raw_data, btf_size, base);
> > > +       if (!ASSERT_NEQ(btf, NULL, "parse_btf"))
> >
> > ASSERT_OK_PTR()
>
> Ack.
>
> > Do you intend to add more subtests? If not, why even use a subtest structure?
>
> The original intention was to add kmod support, but that didn't pan
> out, see my discussion with Alexei. I can drop the subtest if you
> want, but I'd probably keep the helper as it is.

yeah, let's drop the subtest, it's a bit easier to work with
non-subtest tests, IMO