From patchwork Sun May 3 02:54:19 2020
X-Patchwork-Submitter: Eric Dumazet
X-Patchwork-Id: 220040
Date: Sat, 2 May 2020 19:54:19 -0700
In-Reply-To: <20200503025422.219257-1-edumazet@google.com>
Message-Id: <20200503025422.219257-3-edumazet@google.com>
References: <20200503025422.219257-1-edumazet@google.com>
Subject: [PATCH net-next 2/5] net_sched: sch_fq: change fq_flow size/layout
From: Eric Dumazet
To: "David S . Miller"
Cc: netdev, Eric Dumazet, Eric Dumazet
X-Mailing-List: netdev@vger.kernel.org

sizeof(struct fq_flow) is 112 bytes on 64bit arches.
This means that half of the flows fit in two cache lines,
while the other half spills over into a third cache line.
This patch adds cache line alignment, and makes sure that only the
first cache line is touched by fq_enqueue(), which is more expensive
than fq_dequeue() in general.

Signed-off-by: Eric Dumazet
---
 net/sched/sch_fq.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/net/sched/sch_fq.c b/net/sched/sch_fq.c
index 1649928fe2c1b7476050e5eee3c494c76d114c62..7a2b3195938ede3c14c37b90c9604185cfa3f651 100644
--- a/net/sched/sch_fq.c
+++ b/net/sched/sch_fq.c
@@ -66,6 +66,7 @@ static inline struct fq_skb_cb *fq_skb_cb(struct sk_buff *skb)
  * in linear list (head,tail), otherwise are placed in a rbtree (t_root).
  */
 struct fq_flow {
+/* First cache line : used in fq_gc(), fq_enqueue(), fq_dequeue() */
	struct rb_root	t_root;
	struct sk_buff	*head;		/* list of skbs for this flow : first skb */
	union {
@@ -74,14 +75,18 @@ struct fq_flow {
	};
	struct rb_node	fq_node;	/* anchor in fq_root[] trees */
	struct sock	*sk;
+	u32		socket_hash;	/* sk_hash */
	int		qlen;		/* number of packets in flow queue */
+
+/* Second cache line, used in fq_dequeue() */
	int		credit;
-	u32		socket_hash;	/* sk_hash */
+	/* 32bit hole on 64bit arches */
+
	struct fq_flow	*next;		/* next pointer in RR lists */

	struct rb_node	rate_node;	/* anchor in q->delayed tree */
	u64		time_next_packet;
-};
+} ____cacheline_aligned_in_smp;

 struct fq_flow_head {
	struct fq_flow *first;

From patchwork Sun May 3 02:54:21 2020
X-Patchwork-Submitter: Eric Dumazet
X-Patchwork-Id: 220039
Date: Sat, 2 May 2020 19:54:21 -0700
In-Reply-To: <20200503025422.219257-1-edumazet@google.com>
Message-Id: <20200503025422.219257-5-edumazet@google.com>
References: <20200503025422.219257-1-edumazet@google.com>
Subject: [PATCH net-next 4/5] net_sched: sch_fq: do not call fq_peek() twice per packet
From: Eric Dumazet
To: "David S . Miller"
Cc: netdev, Eric Dumazet, Eric Dumazet
X-Mailing-List: netdev@vger.kernel.org

This refactors the code so that fq_dequeue_head() no longer calls
fq_peek() itself, since the caller can provide the skb.

Also rename fq_dequeue_head() to fq_dequeue_skb(), because 'head' is
a bit vague given that the skb could come from the t_root rb-tree.
Signed-off-by: Eric Dumazet
---
 net/sched/sch_fq.c | 34 ++++++++++++++++------------------
 1 file changed, 16 insertions(+), 18 deletions(-)

diff --git a/net/sched/sch_fq.c b/net/sched/sch_fq.c
index 56e4f3c4380c517136b22862771f9899a7fd99f2..4a28f611edf0cd4ac7fb53fc1c2a4ba12060bf59 100644
--- a/net/sched/sch_fq.c
+++ b/net/sched/sch_fq.c
@@ -388,19 +388,17 @@ static void fq_erase_head(struct Qdisc *sch, struct fq_flow *flow,
	}
 }

-/* remove one skb from head of flow queue */
-static struct sk_buff *fq_dequeue_head(struct Qdisc *sch, struct fq_flow *flow)
+/* Remove one skb from flow queue.
+ * This skb must be the return value of prior fq_peek().
+ */
+static void fq_dequeue_skb(struct Qdisc *sch, struct fq_flow *flow,
+			   struct sk_buff *skb)
 {
-	struct sk_buff *skb = fq_peek(flow);
-
-	if (skb) {
-		fq_erase_head(sch, flow, skb);
-		skb_mark_not_on_list(skb);
-		flow->qlen--;
-		qdisc_qstats_backlog_dec(sch, skb);
-		sch->q.qlen--;
-	}
-	return skb;
+	fq_erase_head(sch, flow, skb);
+	skb_mark_not_on_list(skb);
+	flow->qlen--;
+	qdisc_qstats_backlog_dec(sch, skb);
+	sch->q.qlen--;
 }

 static void flow_queue_add(struct fq_flow *flow, struct sk_buff *skb)
@@ -538,9 +536,11 @@ static struct sk_buff *fq_dequeue(struct Qdisc *sch)
	if (!sch->q.qlen)
		return NULL;

-	skb = fq_dequeue_head(sch, &q->internal);
-	if (skb)
+	skb = fq_peek(&q->internal);
+	if (unlikely(skb)) {
+		fq_dequeue_skb(sch, &q->internal, skb);
		goto out;
+	}

	q->ktime_cache = now = ktime_get_ns();
	fq_check_throttled(q, now);
@@ -580,10 +580,8 @@ static struct sk_buff *fq_dequeue(struct Qdisc *sch)
			INET_ECN_set_ce(skb);
			q->stat_ce_mark++;
		}
-	}
-
-	skb = fq_dequeue_head(sch, f);
-	if (!skb) {
+		fq_dequeue_skb(sch, f, skb);
+	} else {
		head->first = f->next;
		/* force a pass through old_flows to prevent starvation */
		if ((head == &q->new_flows) && q->old_flows.first) {