From patchwork Thu Jul 15 00:54:08 2021
X-Patchwork-Submitter: Alexei Starovoitov
X-Patchwork-Id: 477883
From: Alexei Starovoitov
To: davem@davemloft.net
Cc: daniel@iogearbox.net, andrii@kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org, kernel-team@fb.com
Subject: [PATCH v7 bpf-next 02/11] bpf: Factor out bpf_spin_lock into helpers.
Date: Wed, 14 Jul 2021 17:54:08 -0700
Message-Id: <20210715005417.78572-3-alexei.starovoitov@gmail.com>
In-Reply-To: <20210715005417.78572-1-alexei.starovoitov@gmail.com>
References: <20210715005417.78572-1-alexei.starovoitov@gmail.com>

From: Alexei Starovoitov

Move ____bpf_spin_lock/unlock into helpers to make it more clear
that quadruple underscore bpf_spin_lock/unlock are irqsave/restore variants.

Signed-off-by: Alexei Starovoitov
Acked-by: Martin KaFai Lau
Acked-by: Andrii Nakryiko
---
 kernel/bpf/helpers.c | 18 ++++++++++++++----
 1 file changed, 14 insertions(+), 4 deletions(-)

diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index 62cf00383910..38be3cfc2f58 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -289,13 +289,18 @@ static inline void __bpf_spin_unlock(struct bpf_spin_lock *lock)
 
 static DEFINE_PER_CPU(unsigned long, irqsave_flags);
 
-notrace BPF_CALL_1(bpf_spin_lock, struct bpf_spin_lock *, lock)
+static inline void __bpf_spin_lock_irqsave(struct bpf_spin_lock *lock)
 {
 	unsigned long flags;
 
 	local_irq_save(flags);
 	__bpf_spin_lock(lock);
 	__this_cpu_write(irqsave_flags, flags);
+}
+
+notrace BPF_CALL_1(bpf_spin_lock, struct bpf_spin_lock *, lock)
+{
+	__bpf_spin_lock_irqsave(lock);
 	return 0;
 }
 
@@ -306,13 +311,18 @@ const struct bpf_func_proto bpf_spin_lock_proto = {
 	.arg1_type	= ARG_PTR_TO_SPIN_LOCK,
 };
 
-notrace BPF_CALL_1(bpf_spin_unlock, struct bpf_spin_lock *, lock)
+static inline void __bpf_spin_unlock_irqrestore(struct bpf_spin_lock *lock)
 {
 	unsigned long flags;
 
 	flags = __this_cpu_read(irqsave_flags);
 	__bpf_spin_unlock(lock);
 	local_irq_restore(flags);
+}
+
+notrace BPF_CALL_1(bpf_spin_unlock, struct bpf_spin_lock *, lock)
+{
+	__bpf_spin_unlock_irqrestore(lock);
 	return 0;
 }
 
@@ -333,9 +343,9 @@ void copy_map_value_locked(struct bpf_map *map, void *dst, void *src,
 	else
 		lock = dst + map->spin_lock_off;
 	preempt_disable();
-	____bpf_spin_lock(lock);
+	__bpf_spin_lock_irqsave(lock);
 	copy_map_value(map, dst, src);
-	____bpf_spin_unlock(lock);
+	__bpf_spin_unlock_irqrestore(lock);
 	preempt_enable();
 }
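For context, the kernel-side helpers touched above back the bpf_spin_lock()/bpf_spin_unlock()
program helpers, which operate on a struct bpf_spin_lock embedded in a map value. A minimal
illustrative BPF C sketch (not part of this patch; map, field and program names are invented):

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

char _license[] SEC("license") = "GPL";

struct counter {
	struct bpf_spin_lock lock;
	__u64 value;
};

struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__uint(max_entries, 1);
	__type(key, int);
	__type(value, struct counter);
} counters SEC(".maps");

SEC("tc")
int bump(struct __sk_buff *skb)
{
	int key = 0;
	struct counter *c = bpf_map_lookup_elem(&counters, &key);

	if (!c)
		return 0;
	bpf_spin_lock(&c->lock);	/* ends up in __bpf_spin_lock_irqsave() */
	c->value++;
	bpf_spin_unlock(&c->lock);	/* ends up in __bpf_spin_unlock_irqrestore() */
	return 0;
}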
From patchwork Thu Jul 15 00:54:11 2021
X-Patchwork-Submitter: Alexei Starovoitov
X-Patchwork-Id: 477881
From: Alexei Starovoitov
To: davem@davemloft.net
Cc: daniel@iogearbox.net, andrii@kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org, kernel-team@fb.com
Subject: [PATCH v7 bpf-next 05/11] bpf: Prevent pointer mismatch in bpf_timer_init.
Date: Wed, 14 Jul 2021 17:54:11 -0700
Message-Id: <20210715005417.78572-6-alexei.starovoitov@gmail.com>
In-Reply-To: <20210715005417.78572-1-alexei.starovoitov@gmail.com>
References: <20210715005417.78572-1-alexei.starovoitov@gmail.com>

From: Alexei Starovoitov

bpf_timer_init() arguments are:
1. pointer to a timer (which is embedded in map element).
2. pointer to a map.
Make sure that pointer to a timer actually belongs to that map.
Use map_uid (which is unique id of inner map) to reject:
inner_map1 = bpf_map_lookup_elem(outer_map, key1)
inner_map2 = bpf_map_lookup_elem(outer_map, key2)
if (inner_map1 && inner_map2) {
    timer = bpf_map_lookup_elem(inner_map1);
    if (timer)
        // mismatch would have been allowed
        bpf_timer_init(timer, inner_map2);
}

Signed-off-by: Alexei Starovoitov
Acked-by: Martin KaFai Lau
Acked-by: Andrii Nakryiko
---
 include/linux/bpf_verifier.h |  9 ++++++++-
 kernel/bpf/verifier.c        | 31 ++++++++++++++++++++++++++++---
 2 files changed, 36 insertions(+), 4 deletions(-)

diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index e774ecc1cd1f..5d3169b57e6e 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -53,7 +53,14 @@ struct bpf_reg_state {
 	/* valid when type == CONST_PTR_TO_MAP | PTR_TO_MAP_VALUE |
 	 *   PTR_TO_MAP_VALUE_OR_NULL
 	 */
-	struct bpf_map *map_ptr;
+	struct {
+		struct bpf_map *map_ptr;
+		/* To distinguish map lookups from outer map
+		 * the map_uid is non-zero for registers
+		 * pointing to inner maps.
+		 */
+		u32 map_uid;
+	};
 
 	/* for PTR_TO_BTF_ID */
 	struct {
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index e44c36107d11..cb393de3c818 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -255,6 +255,7 @@ struct bpf_call_arg_meta {
 	int mem_size;
 	u64 msize_max_value;
 	int ref_obj_id;
+	int map_uid;
 	int func_id;
 	struct btf *btf;
 	u32 btf_id;
@@ -1135,6 +1136,10 @@ static void mark_ptr_not_null_reg(struct bpf_reg_state *reg)
 		if (map->inner_map_meta) {
 			reg->type = CONST_PTR_TO_MAP;
 			reg->map_ptr = map->inner_map_meta;
+			/* transfer reg's id which is unique for every map_lookup_elem
+			 * as UID of the inner map.
+			 */
+			reg->map_uid = reg->id;
 		} else if (map->map_type == BPF_MAP_TYPE_XSKMAP) {
 			reg->type = PTR_TO_XDP_SOCK;
 		} else if (map->map_type == BPF_MAP_TYPE_SOCKMAP ||
@@ -4708,6 +4713,7 @@ static int process_timer_func(struct bpf_verifier_env *env, int regno,
 		verbose(env, "verifier bug. Two map pointers in a timer helper\n");
 		return -EFAULT;
 	}
+	meta->map_uid = reg->map_uid;
 	meta->map_ptr = map;
 	return 0;
 }
@@ -5006,11 +5012,29 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
 
 	if (arg_type == ARG_CONST_MAP_PTR) {
 		/* bpf_map_xxx(map_ptr) call: remember that map_ptr */
-		if (meta->map_ptr && meta->map_ptr != reg->map_ptr) {
-			verbose(env, "Map pointer doesn't match bpf_timer.\n");
-			return -EINVAL;
+		if (meta->map_ptr) {
+			/* Use map_uid (which is unique id of inner map) to reject:
+			 * inner_map1 = bpf_map_lookup_elem(outer_map, key1)
+			 * inner_map2 = bpf_map_lookup_elem(outer_map, key2)
+			 * if (inner_map1 && inner_map2) {
+			 *     timer = bpf_map_lookup_elem(inner_map1);
+			 *     if (timer)
+			 *         // mismatch would have been allowed
+			 *         bpf_timer_init(timer, inner_map2);
+			 * }
+			 *
+			 * Comparing map_ptr is enough to distinguish normal and outer maps.
+			 */
+			if (meta->map_ptr != reg->map_ptr ||
+			    meta->map_uid != reg->map_uid) {
+				verbose(env,
+					"timer pointer in R1 map_uid=%d doesn't match map pointer in R2 map_uid=%d\n",
+					meta->map_uid, reg->map_uid);
+				return -EINVAL;
+			}
 		}
 		meta->map_ptr = reg->map_ptr;
+		meta->map_uid = reg->map_uid;
 	} else if (arg_type == ARG_PTR_TO_MAP_KEY) {
 		/* bpf_map_xxx(..., map_ptr, ..., key) call:
 		 * check that [key, key + map->key_size) are within
@@ -6204,6 +6228,7 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
 			return -EINVAL;
 		}
 		regs[BPF_REG_0].map_ptr = meta.map_ptr;
+		regs[BPF_REG_0].map_uid = meta.map_uid;
 		if (fn->ret_type == RET_PTR_TO_MAP_VALUE) {
 			regs[BPF_REG_0].type = PTR_TO_MAP_VALUE;
 			if (map_value_has_spin_lock(meta.map_ptr))
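Seen from the program side, every bpf_map_lookup_elem() on an outer map yields an inner-map
pointer with its own map_uid, and bpf_timer_init() now requires the timer and the map argument
to come from the same lookup. A condensed sketch of the rejected pattern (illustrative only;
it mirrors the timer_mim_reject.c selftest added later in this series, names are invented):

#include <linux/bpf.h>
#include <time.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char _license[] SEC("license") = "GPL";

struct elem {
	struct bpf_timer timer;
};

struct inner_map {
	__uint(type, BPF_MAP_TYPE_HASH);
	__uint(max_entries, 8);
	__type(key, int);
	__type(value, struct elem);
} inner SEC(".maps");

struct {
	__uint(type, BPF_MAP_TYPE_ARRAY_OF_MAPS);
	__uint(max_entries, 2);
	__uint(key_size, sizeof(int));
	__uint(value_size, sizeof(int));
	__array(values, struct inner_map);
} outer SEC(".maps") = {
	.values = { [0] = &inner },
};

SEC("fentry/bpf_fentry_test1")
int BPF_PROG(mismatch)
{
	int key1 = 0, key2 = 1, hkey = 0;
	void *inner_map1, *inner_map2;
	struct elem *val;

	inner_map1 = bpf_map_lookup_elem(&outer, &key1);
	inner_map2 = bpf_map_lookup_elem(&outer, &key2);
	if (!inner_map1 || !inner_map2)
		return 0;
	val = bpf_map_lookup_elem(inner_map1, &hkey);
	if (!val)
		return 0;
	/* val->timer was found through inner_map1 (one map_uid), but the map
	 * argument is inner_map2 (another map_uid): the verifier now rejects
	 * this with the "timer pointer in R1 ... doesn't match map pointer in
	 * R2 ..." message added above. Passing inner_map1 instead is accepted.
	 */
	bpf_timer_init(&val->timer, inner_map2, CLOCK_MONOTONIC);
	return 0;
}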
From patchwork Thu Jul 15 00:54:12 2021
X-Patchwork-Submitter: Alexei Starovoitov
X-Patchwork-Id: 477880
From: Alexei Starovoitov
To: davem@davemloft.net
Cc: daniel@iogearbox.net, andrii@kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org, kernel-team@fb.com
Subject: [PATCH v7 bpf-next 06/11] bpf: Remember BTF of inner maps.
Date: Wed, 14 Jul 2021 17:54:12 -0700
Message-Id: <20210715005417.78572-7-alexei.starovoitov@gmail.com>
In-Reply-To: <20210715005417.78572-1-alexei.starovoitov@gmail.com>
References: <20210715005417.78572-1-alexei.starovoitov@gmail.com>

From: Alexei Starovoitov

BTF is required for 'struct bpf_timer' to be recognized inside map value.
The bpf timers are supported inside inner maps.
Remember 'struct btf *' in inner_map_meta to make it available
to the verifier in the sequence:

struct bpf_map *inner_map = bpf_map_lookup_elem(&outer_map, ...);
if (inner_map)
    timer = bpf_map_lookup_elem(&inner_map, ...);

Signed-off-by: Alexei Starovoitov
Acked-by: Yonghong Song
Acked-by: Martin KaFai Lau
Acked-by: Andrii Nakryiko
---
 kernel/bpf/map_in_map.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/kernel/bpf/map_in_map.c b/kernel/bpf/map_in_map.c
index 890dfe14e731..5cd8f5277279 100644
--- a/kernel/bpf/map_in_map.c
+++ b/kernel/bpf/map_in_map.c
@@ -3,6 +3,7 @@
  */
 #include <linux/slab.h>
 #include <linux/bpf.h>
+#include <linux/btf.h>
 
 #include "map_in_map.h"
 
@@ -51,6 +52,10 @@ struct bpf_map *bpf_map_meta_alloc(int inner_map_ufd)
 	inner_map_meta->max_entries = inner_map->max_entries;
 	inner_map_meta->spin_lock_off = inner_map->spin_lock_off;
 	inner_map_meta->timer_off = inner_map->timer_off;
+	if (inner_map->btf) {
+		btf_get(inner_map->btf);
+		inner_map_meta->btf = inner_map->btf;
+	}
 
 	/* Misc members not needed in bpf_map_meta_equal() check. */
 	inner_map_meta->ops = inner_map->ops;
 
@@ -66,6 +71,7 @@ struct bpf_map *bpf_map_meta_alloc(int inner_map_ufd)
 
 void bpf_map_meta_free(struct bpf_map *map_meta)
 {
+	btf_put(map_meta->btf);
 	kfree(map_meta);
 }
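The practical consequence on the program side is that an inner map holding a
struct bpf_timer has to be declared with BTF type information, which is what the
__type() macros provide. A hedged, illustrative comparison (sketch, not taken from
this patch; map names are invented):

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

char _license[] SEC("license") = "GPL";

struct map_val {
	struct bpf_timer timer;
};

/* BTF-described inner map: key/value types are recorded in BTF, so the
 * verifier can locate the bpf_timer field (this is the form the selftests
 * later in this series use).
 */
struct inner_with_btf {
	__uint(type, BPF_MAP_TYPE_HASH);
	__uint(max_entries, 16);
	__type(key, int);
	__type(value, struct map_val);
} inner_ok SEC(".maps");

/* Size-only definition carries no BTF for the value type; a bpf_timer
 * inside such a value cannot be recognized, so timer helpers on it are
 * expected to be rejected (illustrative contrast, not from this patch).
 */
struct inner_no_btf {
	__uint(type, BPF_MAP_TYPE_HASH);
	__uint(max_entries, 16);
	__uint(key_size, sizeof(int));
	__uint(value_size, sizeof(struct map_val));
} inner_bad SEC(".maps");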
From patchwork Thu Jul 15 00:54:14 2021
X-Patchwork-Submitter: Alexei Starovoitov
X-Patchwork-Id: 477882
From: Alexei Starovoitov
To: davem@davemloft.net
Cc: daniel@iogearbox.net, andrii@kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org, kernel-team@fb.com
Subject: [PATCH v7 bpf-next 08/11] bpf: Implement verifier support for validation of async callbacks.
Date: Wed, 14 Jul 2021 17:54:14 -0700
Message-Id: <20210715005417.78572-9-alexei.starovoitov@gmail.com>
In-Reply-To: <20210715005417.78572-1-alexei.starovoitov@gmail.com>
References: <20210715005417.78572-1-alexei.starovoitov@gmail.com>

From: Alexei Starovoitov

bpf_for_each_map_elem() and bpf_timer_set_callback() helpers are relying
on PTR_TO_FUNC infra in the verifier to validate addresses to subprograms
and pass them into the helpers as function callbacks.
In case of bpf_for_each_map_elem() the callback is invoked synchronously
and the verifier treats it as a normal subprogram call by adding another
bpf_func_state and new frame in __check_func_call().
bpf_timer_set_callback() doesn't invoke the callback directly.
The subprogram will be called asynchronously from bpf_timer_cb().
Teach the verifier to validate such async callbacks as special kind
of jump by pushing verifier state into stack and let pop_stack()
process it.

Special care needs to be taken during state pruning.
The call insn doing bpf_timer_set_callback has to be a prune_point.
Otherwise short timer callbacks might not have prune points in front of
bpf_timer_set_callback() which means is_state_visited() will be called
after this call insn is processed in __check_func_call(). Which means
that another async_cb state will be pushed to be walked later and the
verifier will eventually hit BPF_COMPLEXITY_LIMIT_JMP_SEQ limit.
Since push_async_cb() looks like another push_stack() branch the
infinite loop detection will trigger false positive. To recognize
this case mark such states as in_async_callback_fn.
To distinguish infinite loop in async callback vs the same callback
called with different arguments for different map and timer add
async_entry_cnt to bpf_func_state.

Enforce return zero from async callbacks.

Signed-off-by: Alexei Starovoitov
Acked-by: Andrii Nakryiko
---
 include/linux/bpf_verifier.h |   9 ++-
 kernel/bpf/helpers.c         |   8 +--
 kernel/bpf/verifier.c        | 123 ++++++++++++++++++++++++++++++++++-
 3 files changed, 131 insertions(+), 9 deletions(-)

diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index 5d3169b57e6e..242d0b1a0772 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -208,12 +208,19 @@ struct bpf_func_state {
 	 * zero == main subprog
 	 */
 	u32 subprogno;
+	/* Every bpf_timer_start will increment async_entry_cnt.
+	 * It's used to distinguish:
+	 * void foo(void) { for(;;); }
+	 * void foo(void) { bpf_timer_set_callback(,foo); }
+	 */
+	u32 async_entry_cnt;
+	bool in_callback_fn;
+	bool in_async_callback_fn;
 
 	/* The following fields should be last. See copy_func_state() */
 	int acquired_refs;
 	struct bpf_reference_state *refs;
 	int allocated_stack;
-	bool in_callback_fn;
 	struct bpf_stack_state *stack;
 };
 
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index 74b16593983d..9fe846ec6bd1 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -1043,7 +1043,6 @@ static enum hrtimer_restart bpf_timer_cb(struct hrtimer *hrtimer)
 	void *callback_fn;
 	void *key;
 	u32 idx;
-	int ret;
 
 	callback_fn = rcu_dereference_check(t->callback_fn, rcu_read_lock_bh_held());
 	if (!callback_fn)
@@ -1066,10 +1065,9 @@ static enum hrtimer_restart bpf_timer_cb(struct hrtimer *hrtimer)
 		key = value - round_up(map->key_size, 8);
 	}
 
-	ret = BPF_CAST_CALL(callback_fn)((u64)(long)map,
-					 (u64)(long)key,
-					 (u64)(long)value, 0, 0);
-	WARN_ON(ret != 0); /* Next patch moves this check into the verifier */
+	BPF_CAST_CALL(callback_fn)((u64)(long)map, (u64)(long)key,
+				   (u64)(long)value, 0, 0);
+	/* The verifier checked that return value is zero. */
 
 	this_cpu_write(hrtimer_running, NULL);
 out:
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 1511f92b4cf4..ab6ce598a652 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -735,6 +735,10 @@ static void print_verifier_state(struct bpf_verifier_env *env,
 			if (state->refs[i].id)
 				verbose(env, ",%d", state->refs[i].id);
 	}
+	if (state->in_callback_fn)
+		verbose(env, " cb");
+	if (state->in_async_callback_fn)
+		verbose(env, " async_cb");
 	verbose(env, "\n");
 }
 
@@ -1527,6 +1531,54 @@ static void init_func_state(struct bpf_verifier_env *env,
 	init_reg_state(env, state);
 }
 
+/* Similar to push_stack(), but for async callbacks */
+static struct bpf_verifier_state *push_async_cb(struct bpf_verifier_env *env,
+						int insn_idx, int prev_insn_idx,
+						int subprog)
+{
+	struct bpf_verifier_stack_elem *elem;
+	struct bpf_func_state *frame;
+
+	elem = kzalloc(sizeof(struct bpf_verifier_stack_elem), GFP_KERNEL);
+	if (!elem)
+		goto err;
+
+	elem->insn_idx = insn_idx;
+	elem->prev_insn_idx = prev_insn_idx;
+	elem->next = env->head;
+	elem->log_pos = env->log.len_used;
+	env->head = elem;
+	env->stack_size++;
+	if (env->stack_size > BPF_COMPLEXITY_LIMIT_JMP_SEQ) {
+		verbose(env,
+			"The sequence of %d jumps is too complex for async cb.\n",
+			env->stack_size);
+		goto err;
+	}
+	/* Unlike push_stack() do not copy_verifier_state().
+	 * The caller state doesn't matter.
+	 * This is async callback. It starts in a fresh stack.
+	 * Initialize it similar to do_check_common().
+	 */
+	elem->st.branches = 1;
+	frame = kzalloc(sizeof(*frame), GFP_KERNEL);
+	if (!frame)
+		goto err;
+	init_func_state(env, frame,
+			BPF_MAIN_FUNC /* callsite */,
+			0 /* frameno within this callchain */,
+			subprog /* subprog number within this prog */);
+	elem->st.frame[0] = frame;
+	return &elem->st;
+err:
+	free_verifier_state(env->cur_state, true);
+	env->cur_state = NULL;
+	/* pop all elements and return */
+	while (!pop_stack(env, NULL, NULL, false));
+	return NULL;
+}
+
+
 enum reg_arg_type {
 	SRC_OP,		/* register is used as source operand */
 	DST_OP,		/* register is used as destination operand */
@@ -5704,6 +5756,30 @@ static int __check_func_call(struct bpf_verifier_env *env, struct bpf_insn *insn
 		}
 	}
 
+	if (insn->code == (BPF_JMP | BPF_CALL) &&
+	    insn->imm == BPF_FUNC_timer_set_callback) {
+		struct bpf_verifier_state *async_cb;
+
+		/* there is no real recursion here. timer callbacks are async */
+		async_cb = push_async_cb(env, env->subprog_info[subprog].start,
+					 *insn_idx, subprog);
+		if (!async_cb)
+			return -EFAULT;
+		callee = async_cb->frame[0];
+		callee->async_entry_cnt = caller->async_entry_cnt + 1;
+
+		/* Convert bpf_timer_set_callback() args into timer callback args */
+		err = set_callee_state_cb(env, caller, callee, *insn_idx);
+		if (err)
+			return err;
+
+		clear_caller_saved_regs(env, caller->regs);
+		mark_reg_unknown(env, caller->regs, BPF_REG_0);
+		caller->regs[BPF_REG_0].subreg_def = DEF_NOT_SUBREG;
+		/* continue with next insn after call */
+		return 0;
+	}
+
 	callee = kzalloc(sizeof(*callee), GFP_KERNEL);
 	if (!callee)
 		return -ENOMEM;
@@ -5856,6 +5932,7 @@ static int set_timer_callback_state(struct bpf_verifier_env *env,
 	/* unused */
 	__mark_reg_not_init(env, &callee->regs[BPF_REG_4]);
 	__mark_reg_not_init(env, &callee->regs[BPF_REG_5]);
+	callee->in_async_callback_fn = true;
 	return 0;
 }
 
@@ -9224,7 +9301,8 @@ static int check_return_code(struct bpf_verifier_env *env)
 	struct tnum range = tnum_range(0, 1);
 	enum bpf_prog_type prog_type = resolve_prog_type(env->prog);
 	int err;
-	const bool is_subprog = env->cur_state->frame[0]->subprogno;
+	struct bpf_func_state *frame = env->cur_state->frame[0];
+	const bool is_subprog = frame->subprogno;
 
 	/* LSM and struct_ops func-ptr's return type could be "void" */
 	if (!is_subprog &&
@@ -9249,6 +9327,22 @@ static int check_return_code(struct bpf_verifier_env *env)
 	}
 
 	reg = cur_regs(env) + BPF_REG_0;
+
+	if (frame->in_async_callback_fn) {
+		/* enforce return zero from async callbacks like timer */
+		if (reg->type != SCALAR_VALUE) {
+			verbose(env, "In async callback the register R0 is not a known value (%s)\n",
+				reg_type_str[reg->type]);
+			return -EINVAL;
+		}
+
+		if (!tnum_in(tnum_const(0), reg->var_off)) {
+			verbose_invalid_scalar(env, reg, &range, "async callback", "R0");
+			return -EINVAL;
+		}
+		return 0;
+	}
+
 	if (is_subprog) {
 		if (reg->type != SCALAR_VALUE) {
 			verbose(env, "At subprogram exit the register R0 is not a scalar value (%s)\n",
@@ -9496,6 +9590,13 @@ static int visit_insn(int t, int insn_cnt, struct bpf_verifier_env *env)
 		return DONE_EXPLORING;
 
 	case BPF_CALL:
+		if (insns[t].imm == BPF_FUNC_timer_set_callback)
+			/* Mark this call insn to trigger is_state_visited() check
+			 * before call itself is processed by __check_func_call().
+			 * Otherwise new async state will be pushed for further
+			 * exploration.
+			 */
+			init_explored_state(env, t);
 		return visit_func_call_insn(t, insn_cnt, insns, env,
 					    insns[t].src_reg == BPF_PSEUDO_CALL);
 
@@ -10503,9 +10604,25 @@ static int is_state_visited(struct bpf_verifier_env *env, int insn_idx)
 		states_cnt++;
 		if (sl->state.insn_idx != insn_idx)
 			goto next;
+
 		if (sl->state.branches) {
-			if (states_maybe_looping(&sl->state, cur) &&
-			    states_equal(env, &sl->state, cur)) {
+			struct bpf_func_state *frame = sl->state.frame[sl->state.curframe];
+
+			if (frame->in_async_callback_fn &&
+			    frame->async_entry_cnt != cur->frame[cur->curframe]->async_entry_cnt) {
+				/* Different async_entry_cnt means that the verifier is
+				 * processing another entry into async callback.
+				 * Seeing the same state is not an indication of infinite
+				 * loop or infinite recursion.
+				 * But finding the same state doesn't mean that it's safe
+				 * to stop processing the current state. The previous state
+				 * hasn't yet reached bpf_exit, since state.branches > 0.
+				 * Checking in_async_callback_fn alone is not enough either.
+				 * Since the verifier still needs to catch infinite loops
+				 * inside async callbacks.
+				 */
+			} else if (states_maybe_looping(&sl->state, cur) &&
+				   states_equal(env, &sl->state, cur)) {
 				verbose_linfo(env, insn_idx, "; ");
 				verbose(env, "infinite loop detected at insn %d\n", insn_idx);
 				return -EINVAL;
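From the program writer's point of view, the new rules mainly mean that a timer callback is
verified as its own asynchronous entry point and must provably return 0. A hedged sketch
(map, callback and program names invented; the authoritative coverage is the selftests later
in this series):

#include <linux/bpf.h>
#include <time.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char _license[] SEC("license") = "GPL";

struct elem {
	struct bpf_timer timer;
};

struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__uint(max_entries, 1);
	__type(key, int);
	__type(value, struct elem);
} timers SEC(".maps");

/* Verified as an async callback: fresh stack, known argument types, and the
 * return value must be provably zero (returning e.g. 1 would now fail to load).
 */
static int timer_cb(void *map, int *key, struct elem *val)
{
	return 0;
}

SEC("fentry/bpf_fentry_test1")
int BPF_PROG(arm_timer)
{
	int key = 0;
	struct elem *val = bpf_map_lookup_elem(&timers, &key);

	if (!val)
		return 0;
	bpf_timer_init(&val->timer, &timers, CLOCK_MONOTONIC);
	/* The callback is not invoked here; the verifier explores it as a
	 * separate asynchronous entry (push_async_cb() above).
	 */
	bpf_timer_set_callback(&val->timer, timer_cb);
	bpf_timer_start(&val->timer, 0 /* ns */, 0);
	return 0;
}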
From patchwork Thu Jul 15 00:54:17 2021
X-Patchwork-Submitter: Alexei Starovoitov
X-Patchwork-Id: 477879
From: Alexei Starovoitov
To: davem@davemloft.net
Cc: daniel@iogearbox.net, andrii@kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org, kernel-team@fb.com
Subject: [PATCH v7 bpf-next 11/11] selftests/bpf: Add a test with bpf_timer in inner map.
Date: Wed, 14 Jul 2021 17:54:17 -0700
Message-Id: <20210715005417.78572-12-alexei.starovoitov@gmail.com>
In-Reply-To: <20210715005417.78572-1-alexei.starovoitov@gmail.com>
References: <20210715005417.78572-1-alexei.starovoitov@gmail.com>

From: Alexei Starovoitov

Check that map-in-map supports bpf timers.

Check that indirect "recursion" of timer callbacks works:
timer_cb1() { bpf_timer_set_callback(timer_cb2); }
timer_cb2() { bpf_timer_set_callback(timer_cb1); }

Check that
  bpf_map_release
    htab_free_prealloced_timers
      bpf_timer_cancel_and_free
        hrtimer_cancel
works while timer cb is running.

"while true; do ./test_progs -t timer_mim; done"
is a great stress test. It caught missing timer cancel in htab->extra_elems.

timer_mim_reject.c is a negative test that checks
that timer<->map mismatch is prevented.

Signed-off-by: Alexei Starovoitov
Acked-by: Andrii Nakryiko
---
 .../selftests/bpf/prog_tests/timer_mim.c      | 69 +++++++++++++++
 tools/testing/selftests/bpf/progs/timer_mim.c | 88 +++++++++++++++++++
 .../selftests/bpf/progs/timer_mim_reject.c    | 74 ++++++++++++++++
 3 files changed, 231 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/timer_mim.c
 create mode 100644 tools/testing/selftests/bpf/progs/timer_mim.c
 create mode 100644 tools/testing/selftests/bpf/progs/timer_mim_reject.c

diff --git a/tools/testing/selftests/bpf/prog_tests/timer_mim.c b/tools/testing/selftests/bpf/prog_tests/timer_mim.c
new file mode 100644
index 000000000000..f5acbcbe33a4
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/timer_mim.c
@@ -0,0 +1,69 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2021 Facebook */
+#include <test_progs.h>
+#include "timer_mim.skel.h"
+#include "timer_mim_reject.skel.h"
+
+static int timer_mim(struct timer_mim *timer_skel)
+{
+	__u32 duration = 0, retval;
+	__u64 cnt1, cnt2;
+	int err, prog_fd, key1 = 1;
+
+	err = timer_mim__attach(timer_skel);
+	if (!ASSERT_OK(err, "timer_attach"))
+		return err;
+
+	prog_fd = bpf_program__fd(timer_skel->progs.test1);
+	err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
+				NULL, NULL, &retval, &duration);
+	ASSERT_OK(err, "test_run");
+	ASSERT_EQ(retval, 0, "test_run");
+	timer_mim__detach(timer_skel);
+
+	/* check that timer_cb[12] are incrementing 'cnt' */
+	cnt1 = READ_ONCE(timer_skel->bss->cnt);
+	usleep(200); /* 100 times more than interval */
+	cnt2 = READ_ONCE(timer_skel->bss->cnt);
+	ASSERT_GT(cnt2, cnt1, "cnt");
+
+	ASSERT_EQ(timer_skel->bss->err, 0, "err");
+	/* check that code paths completed */
+	ASSERT_EQ(timer_skel->bss->ok, 1 | 2, "ok");
+
+	close(bpf_map__fd(timer_skel->maps.inner_htab));
+	err = bpf_map_delete_elem(bpf_map__fd(timer_skel->maps.outer_arr), &key1);
+	ASSERT_EQ(err, 0, "delete inner map");
+
+	/* check that timer_cb[12] are no longer running */
+	cnt1 = READ_ONCE(timer_skel->bss->cnt);
+	usleep(200);
+	cnt2 = READ_ONCE(timer_skel->bss->cnt);
+	ASSERT_EQ(cnt2, cnt1, "cnt");
+
+	return 0;
+}
+
+void test_timer_mim(void)
+{
+	struct timer_mim_reject *timer_reject_skel = NULL;
+	libbpf_print_fn_t old_print_fn = NULL;
+	struct timer_mim *timer_skel = NULL;
+	int err;
+
+	old_print_fn = libbpf_set_print(NULL);
+	timer_reject_skel = timer_mim_reject__open_and_load();
+	libbpf_set_print(old_print_fn);
+	if (!ASSERT_ERR_PTR(timer_reject_skel, "timer_reject_skel_load"))
+		goto cleanup;
+
+	timer_skel = timer_mim__open_and_load();
+	if (!ASSERT_OK_PTR(timer_skel, "timer_skel_load"))
+		goto cleanup;
+
+	err = timer_mim(timer_skel);
+	ASSERT_OK(err, "timer_mim");
+cleanup:
+	timer_mim__destroy(timer_skel);
+	timer_mim_reject__destroy(timer_reject_skel);
+}
diff --git a/tools/testing/selftests/bpf/progs/timer_mim.c b/tools/testing/selftests/bpf/progs/timer_mim.c
new file mode 100644
index 000000000000..2fee7ab105ef
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/timer_mim.c
@@ -0,0 +1,88 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2021 Facebook */
+#include <linux/bpf.h>
+#include <time.h>
+#include <errno.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_tcp_helpers.h"
+
+char _license[] SEC("license") = "GPL";
+struct hmap_elem {
+	int pad; /* unused */
+	struct bpf_timer timer;
+};
+
+struct inner_map {
+	__uint(type, BPF_MAP_TYPE_HASH);
+	__uint(max_entries, 1024);
+	__type(key, int);
+	__type(value, struct hmap_elem);
+} inner_htab SEC(".maps");
+
+#define ARRAY_KEY 1
+#define HASH_KEY 1234
+
+struct outer_arr {
+	__uint(type, BPF_MAP_TYPE_ARRAY_OF_MAPS);
+	__uint(max_entries, 2);
+	__uint(key_size, sizeof(int));
+	__uint(value_size, sizeof(int));
+	__array(values, struct inner_map);
+} outer_arr SEC(".maps") = {
+	.values = { [ARRAY_KEY] = &inner_htab },
+};
+
+__u64 err;
+__u64 ok;
+__u64 cnt;
+
+static int timer_cb1(void *map, int *key, struct hmap_elem *val);
+
+static int timer_cb2(void *map, int *key, struct hmap_elem *val)
+{
+	cnt++;
+	bpf_timer_set_callback(&val->timer, timer_cb1);
+	if (bpf_timer_start(&val->timer, 1000, 0))
+		err |= 1;
+	ok |= 1;
+	return 0;
+}
+
+/* callback for inner hash map */
+static int timer_cb1(void *map, int *key, struct hmap_elem *val)
+{
+	cnt++;
+	bpf_timer_set_callback(&val->timer, timer_cb2);
+	if (bpf_timer_start(&val->timer, 1000, 0))
+		err |= 2;
+	/* Do a lookup to make sure 'map' and 'key' pointers are correct */
+	bpf_map_lookup_elem(map, key);
+	ok |= 2;
+	return 0;
+}
+
+SEC("fentry/bpf_fentry_test1")
+int BPF_PROG(test1, int a)
+{
+	struct hmap_elem init = {};
+	struct bpf_map *inner_map;
+	struct hmap_elem *val;
+	int array_key = ARRAY_KEY;
+	int hash_key = HASH_KEY;
+
+	inner_map = bpf_map_lookup_elem(&outer_arr, &array_key);
+	if (!inner_map)
+		return 0;
+
+	bpf_map_update_elem(inner_map, &hash_key, &init, 0);
+	val = bpf_map_lookup_elem(inner_map, &hash_key);
+	if (!val)
+		return 0;
+
+	bpf_timer_init(&val->timer, inner_map, CLOCK_MONOTONIC);
+	if (bpf_timer_set_callback(&val->timer, timer_cb1))
+		err |= 4;
+	if (bpf_timer_start(&val->timer, 0, 0))
+		err |= 8;
+	return 0;
+}
diff --git a/tools/testing/selftests/bpf/progs/timer_mim_reject.c b/tools/testing/selftests/bpf/progs/timer_mim_reject.c
new file mode 100644
index 000000000000..5d648e3d8a41
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/timer_mim_reject.c
@@ -0,0 +1,74 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2021 Facebook */
+#include <linux/bpf.h>
+#include <time.h>
+#include <errno.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_tcp_helpers.h"
+
+char _license[] SEC("license") = "GPL";
+struct hmap_elem {
+	int pad; /* unused */
+	struct bpf_timer timer;
+};
+
+struct inner_map {
+	__uint(type, BPF_MAP_TYPE_HASH);
+	__uint(max_entries, 1024);
+	__type(key, int);
+	__type(value, struct hmap_elem);
+} inner_htab SEC(".maps");
+
+#define ARRAY_KEY 1
+#define ARRAY_KEY2 2
+#define HASH_KEY 1234
+
+struct outer_arr {
+	__uint(type, BPF_MAP_TYPE_ARRAY_OF_MAPS);
+	__uint(max_entries, 2);
+	__uint(key_size, sizeof(int));
+	__uint(value_size, sizeof(int));
+	__array(values, struct inner_map);
+} outer_arr SEC(".maps") = {
+	.values = { [ARRAY_KEY] = &inner_htab },
+};
+
+__u64 err;
+__u64 ok;
+__u64 cnt;
+
+/* callback for inner hash map */
+static int timer_cb(void *map, int *key, struct hmap_elem *val)
+{
+	return 0;
+}
+
+SEC("fentry/bpf_fentry_test1")
+int BPF_PROG(test1, int a)
+{
+	struct hmap_elem init = {};
+	struct bpf_map *inner_map, *inner_map2;
+	struct hmap_elem *val;
+	int array_key = ARRAY_KEY;
+	int array_key2 = ARRAY_KEY2;
+	int hash_key = HASH_KEY;
+
+	inner_map = bpf_map_lookup_elem(&outer_arr, &array_key);
+	if (!inner_map)
+		return 0;
+
+	inner_map2 = bpf_map_lookup_elem(&outer_arr, &array_key2);
+	if (!inner_map2)
+		return 0;
+	bpf_map_update_elem(inner_map, &hash_key, &init, 0);
+	val = bpf_map_lookup_elem(inner_map, &hash_key);
+	if (!val)
+		return 0;
+
+	bpf_timer_init(&val->timer, inner_map2, CLOCK_MONOTONIC);
+	if (bpf_timer_set_callback(&val->timer, timer_cb))
+		err |= 4;
+	if (bpf_timer_start(&val->timer, 0, 0))
+		err |= 8;
+	return 0;
+}
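Outside the test_progs harness, the same "timer_mim_reject must fail to load" expectation
could be checked with the bpftool-generated skeleton directly. A rough, hypothetical
standalone driver (not part of this patch; it assumes the timer_mim_reject.skel.h header
produced by the selftests build):

#include <stdio.h>
#include "timer_mim_reject.skel.h"

int main(void)
{
	struct timer_mim_reject *skel;

	/* Expected to fail: bpf_timer_init() is called with a mismatching
	 * inner-map pointer, which the verifier rejects since the map_uid
	 * check added earlier in this series.
	 */
	skel = timer_mim_reject__open_and_load();
	if (!skel) {
		printf("load failed as expected\n");
		return 0;
	}
	printf("unexpected: program loaded\n");
	timer_mim_reject__destroy(skel);
	return 1;
}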