From patchwork Wed May 6 00:03:18 2020
X-Patchwork-Submitter: Luke Nelson
X-Patchwork-Id: 219835
From: Luke Nelson
To: bpf@vger.kernel.org
Cc: Luke Nelson, Xi Wang, Björn Töpel, Paul Walmsley, Palmer Dabbelt,
    Albert Ou, Alexei Starovoitov, Daniel Borkmann, Martin KaFai Lau,
    Song Liu, Yonghong Song, Andrii Nakryiko, John Fastabend, KP Singh,
    netdev@vger.kernel.org, linux-riscv@lists.infradead.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH bpf-next 2/4] bpf, riscv: Optimize FROM_LE using verifier_zext on RV64
Date: Tue, 5 May 2020 17:03:18 -0700
Message-Id: <20200506000320.28965-3-luke.r.nels@gmail.com>
In-Reply-To: <20200506000320.28965-1-luke.r.nels@gmail.com>
References: <20200506000320.28965-1-luke.r.nels@gmail.com>

This patch adds two optimizations for BPF_ALU BPF_END BPF_FROM_LE in
the RV64 BPF JIT. First, it enables the verifier zero-extension
optimization to avoid zero extension when imm == 32. Second, it avoids
generating code for imm == 64, since it is equivalent to a no-op.

Co-developed-by: Xi Wang
Signed-off-by: Xi Wang
Signed-off-by: Luke Nelson
---
 arch/riscv/net/bpf_jit_comp64.c | 20 ++++++++++++++------
 1 file changed, 14 insertions(+), 6 deletions(-)

diff --git a/arch/riscv/net/bpf_jit_comp64.c b/arch/riscv/net/bpf_jit_comp64.c
index e2636902a74e..c3ce9a911b66 100644
--- a/arch/riscv/net/bpf_jit_comp64.c
+++ b/arch/riscv/net/bpf_jit_comp64.c
@@ -542,13 +542,21 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 
         /* dst = BSWAP##imm(dst) */
         case BPF_ALU | BPF_END | BPF_FROM_LE:
-        {
-                int shift = 64 - imm;
-
-                emit(rv_slli(rd, rd, shift), ctx);
-                emit(rv_srli(rd, rd, shift), ctx);
+                switch (imm) {
+                case 16:
+                        emit(rv_slli(rd, rd, 48), ctx);
+                        emit(rv_srli(rd, rd, 48), ctx);
+                        break;
+                case 32:
+                        if (!aux->verifier_zext)
+                                emit_zext_32(rd, ctx);
+                        break;
+                case 64:
+                        /* Do nothing */
+                        break;
+                }
                 break;
-        }
+
         case BPF_ALU | BPF_END | BPF_FROM_BE:
                 emit(rv_addi(RV_REG_T2, RV_REG_ZERO, 0), ctx);
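On a little-endian host such as RV64, BPF_FROM_LE with imm bits simply keeps
the low imm bits of dst and clears the rest. The standalone C sketch below
(illustrative only; bpf_from_le() is a made-up name, not kernel code) spells
out why imm == 32 reduces to the 32-bit zero-extension that the verifier's
zext pass can already guarantee, and why imm == 64 is a no-op:

  #include <assert.h>
  #include <stdint.h>

  /* bpf_from_le() models le16/le32/le64 on a little-endian host: keep the
   * low imm bits of dst and clear the rest. The old JIT's slli/srli by
   * (64 - imm) computes exactly this.
   */
  static uint64_t bpf_from_le(uint64_t dst, int imm)
  {
      switch (imm) {
      case 16:
          return (uint16_t)dst;   /* keep low 16 bits, zero the upper 48 */
      case 32:
          return (uint32_t)dst;   /* a plain 32-bit zero-extension */
      case 64:
          return dst;             /* already little-endian 64-bit: no-op */
      }
      return dst;
  }

  int main(void)
  {
      assert(bpf_from_le(0x1122334455667788ULL, 16) == 0x7788);
      assert(bpf_from_le(0x1122334455667788ULL, 32) == 0x55667788);
      assert(bpf_from_le(0x1122334455667788ULL, 64) == 0x1122334455667788ULL);
      return 0;
  }

This is also why the old slli/srli-by-(64 - imm) pair degenerates to two
shifts by zero for imm == 64, which the new code simply omits.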
From patchwork Wed May 6 00:03:20 2020
X-Patchwork-Submitter: Luke Nelson
X-Patchwork-Id: 219836
From: Luke Nelson
To: bpf@vger.kernel.org
Cc: Luke Nelson, Xi Wang, Björn Töpel, Alexei Starovoitov, Daniel Borkmann,
    Martin KaFai Lau, Song Liu, Yonghong Song, Andrii Nakryiko,
    John Fastabend, KP Singh, Paul Walmsley, Palmer Dabbelt, Albert Ou,
    netdev@vger.kernel.org, linux-riscv@lists.infradead.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH bpf-next 4/4] bpf, riscv: Optimize BPF_JSET BPF_K using andi on RV64
Date: Tue, 5 May 2020 17:03:20 -0700
Message-Id: <20200506000320.28965-5-luke.r.nels@gmail.com>
In-Reply-To: <20200506000320.28965-1-luke.r.nels@gmail.com>
References: <20200506000320.28965-1-luke.r.nels@gmail.com>

This patch optimizes BPF_JSET BPF_K by using a RISC-V andi instruction
when the BPF immediate fits in 12 bits, instead of first loading the
immediate to a temporary register.

Examples of generated code with and without this optimization:

BPF_JMP_IMM(BPF_JSET, R1, 2, 1) without optimization:

  20: li    t1,2
  24: and   t1,a0,t1
  28: bnez  t1,0x30

BPF_JMP_IMM(BPF_JSET, R1, 2, 1) with optimization:

  20: andi  t1,a0,2
  24: bnez  t1,0x2c

BPF_JMP32_IMM(BPF_JSET, R1, 2, 1) without optimization:

  20: li    t1,2
  24: mv    t2,a0
  28: slli  t2,t2,0x20
  2c: srli  t2,t2,0x20
  30: slli  t1,t1,0x20
  34: srli  t1,t1,0x20
  38: and   t1,t2,t1
  3c: bnez  t1,0x44

BPF_JMP32_IMM(BPF_JSET, R1, 2, 1) with optimization:

  20: andi  t1,a0,2
  24: bnez  t1,0x2c

In these examples, because the upper 32 bits of the sign-extended
immediate are 0, BPF_JMP BPF_JSET and BPF_JMP32 BPF_JSET are equivalent
and therefore the JIT produces identical code for them.

Co-developed-by: Xi Wang
Signed-off-by: Xi Wang
Signed-off-by: Luke Nelson
---
 arch/riscv/net/bpf_jit_comp64.c | 27 +++++++++++++++++++--------
 1 file changed, 19 insertions(+), 8 deletions(-)

diff --git a/arch/riscv/net/bpf_jit_comp64.c b/arch/riscv/net/bpf_jit_comp64.c
index b07cef952019..6cfd164cbe88 100644
--- a/arch/riscv/net/bpf_jit_comp64.c
+++ b/arch/riscv/net/bpf_jit_comp64.c
@@ -792,8 +792,6 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
         case BPF_JMP32 | BPF_JSGE | BPF_K:
         case BPF_JMP | BPF_JSLE | BPF_K:
         case BPF_JMP32 | BPF_JSLE | BPF_K:
-        case BPF_JMP | BPF_JSET | BPF_K:
-        case BPF_JMP32 | BPF_JSET | BPF_K:
                 rvoff = rv_offset(i, off, ctx);
                 s = ctx->ninsns;
                 if (imm) {
@@ -813,15 +811,28 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 
                 /* Adjust for extra insns */
                 rvoff -= (e - s) << 2;
+                emit_branch(BPF_OP(code), rd, rs, rvoff, ctx);
+                break;
 
-                if (BPF_OP(code) == BPF_JSET) {
-                        /* Adjust for and */
-                        rvoff -= 4;
-                        emit(rv_and(rs, rd, rs), ctx);
-                        emit_branch(BPF_JNE, rs, RV_REG_ZERO, rvoff, ctx);
+        case BPF_JMP | BPF_JSET | BPF_K:
+        case BPF_JMP32 | BPF_JSET | BPF_K:
+                rvoff = rv_offset(i, off, ctx);
+                s = ctx->ninsns;
+                if (is_12b_int(imm)) {
+                        emit(rv_andi(RV_REG_T1, rd, imm), ctx);
                 } else {
-                        emit_branch(BPF_OP(code), rd, rs, rvoff, ctx);
+                        emit_imm(RV_REG_T1, imm, ctx);
+                        emit(rv_and(RV_REG_T1, rd, RV_REG_T1), ctx);
                 }
+                /* For jset32, we should clear the upper 32 bits of t1, but
+                 * sign-extension is sufficient here and saves one instruction,
+                 * as t1 is used only in comparison against zero.
+                 */
+                if (!is64 && imm < 0)
+                        emit(rv_addiw(RV_REG_T1, RV_REG_T1, 0), ctx);
+                e = ctx->ninsns;
+                rvoff -= (e - s) << 2;
+                emit_branch(BPF_JNE, RV_REG_T1, RV_REG_ZERO, rvoff, ctx);
                 break;
 
         /* function call */
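The comment added in the new BPF_JSET code relies on two facts: is_12b_int()
gates the andi path on immediates that fit RISC-V's signed 12-bit I-type
range (-2048 to 2047), and for jset32 with a negative immediate a
sign-extending addiw is as good as a zero-extension because t1 is only ever
compared against zero. The standalone C sketch below (illustrative only;
jset32_spec() and jset32_jit() are made-up names, not kernel code) checks the
second fact against the BPF_JMP32 BPF_JSET semantics:

  #include <assert.h>
  #include <stdint.h>

  /* Reference semantics of BPF_JMP32 | BPF_JSET | BPF_K: branch iff the
   * low 32 bits of dst share a set bit with the 32-bit immediate.
   */
  static int jset32_spec(uint64_t dst, int32_t imm)
  {
      return ((uint32_t)dst & (uint32_t)imm) != 0;
  }

  /* What the emitted RV64 code computes: and/andi with the sign-extended
   * immediate, then addiw t1,t1,0 when imm < 0, then bnez t1.
   */
  static int jset32_jit(uint64_t dst, int32_t imm)
  {
      uint64_t t1 = dst & (uint64_t)(int64_t)imm;     /* andi t1,dst,imm (or and) */

      if (imm < 0) {                                  /* addiw t1,t1,0 */
          uint32_t lo = (uint32_t)t1;
          t1 = (lo & 0x80000000u) ? (0xffffffff00000000ULL | lo) : lo;
      }
      return t1 != 0;                                 /* bnez t1,<target> */
  }

  int main(void)
  {
      uint64_t dsts[] = { 0, 2, 0xffffffff00000000ULL, 0xdeadbeefcafef00dULL };
      int32_t imms[] = { 2, -2, -1, 2047, -2048 };

      for (unsigned i = 0; i < sizeof(dsts) / sizeof(dsts[0]); i++)
          for (unsigned j = 0; j < sizeof(imms) / sizeof(imms[0]); j++)
              assert(jset32_spec(dsts[i], imms[j]) == jset32_jit(dsts[i], imms[j]));
      return 0;
  }

Dropping the addiw would misfire for, e.g., dst = 0xffffffff00000000 with
imm = -1: the surviving upper bits of t1 would make bnez take the branch even
though none of the low 32 bits are set.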