From patchwork Tue Feb 14 21:51:02 2017
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 93982
Delivered-To: patch@linaro.org
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org, herbert@gondor.apana.org.au
Cc: Ard Biesheuvel, "Jason A . Donenfeld"
Subject: [PATCH v2 2/2] crypto: algapi - annotate expected branch behavior in crypto_inc()
Date: Tue, 14 Feb 2017 21:51:02 +0000
Message-Id: <1487109062-3419-2-git-send-email-ard.biesheuvel@linaro.org>
In-Reply-To: <1487109062-3419-1-git-send-email-ard.biesheuvel@linaro.org>
References: <1487109062-3419-1-git-send-email-ard.biesheuvel@linaro.org>
X-Mailing-List: linux-crypto@vger.kernel.org

To prevent unnecessary branching, mark the exit condition of the primary
loop as likely(), given that a carry in a 32-bit counter occurs very
rarely.

On arm64, the resulting code is emitted by GCC as

   9a8:	cmp	w1, #0x3
   9ac:	add	x3, x0, w1, uxtw
   9b0:	b.ls	9e0
   9b4:	ldr	w2, [x3,#-4]!
   9b8:	rev	w2, w2
   9bc:	add	w2, w2, #0x1
   9c0:	rev	w4, w2
   9c4:	str	w4, [x3]
   9c8:	cbz	w2, 9d0
   9cc:	ret

where the two remaining branch conditions (one for size < 4 and one for
the carry) are statically predicted as non-taken, resulting in optimal
execution in the vast majority of cases.

Also, replace the open coded alignment test with IS_ALIGNED().

Cc: Jason A. Donenfeld
Signed-off-by: Ard Biesheuvel
---
v2: no change

 crypto/algapi.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

-- 
2.7.4

diff --git a/crypto/algapi.c b/crypto/algapi.c
index 6b52e8f0b95f..9eed4ef9c971 100644
--- a/crypto/algapi.c
+++ b/crypto/algapi.c
@@ -963,11 +963,11 @@ void crypto_inc(u8 *a, unsigned int size)
 	u32 c;
 
 	if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) ||
-	    !((unsigned long)b & (__alignof__(*b) - 1)))
+	    IS_ALIGNED((unsigned long)b, __alignof__(*b)))
 		for (; size >= 4; size -= 4) {
 			c = be32_to_cpu(*--b) + 1;
 			*b = cpu_to_be32(c);
-			if (c)
+			if (likely(c))
 				return;
 		}
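
As context for the change (not part of the patch), the annotated loop can be sketched in plain userspace C. counter_inc, the local be32 helpers, and the little-endian-host assumption below are illustrative stand-ins for the kernel's crypto_inc() and byte-order macros, not the kernel code itself:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Branch hint, as in the kernel's likely() macro. */
#define likely(x) __builtin_expect(!!(x), 1)

/* Byte-order helpers; this sketch assumes a little-endian host,
 * where a be32 conversion is a byte swap. */
static uint32_t be32_to_cpu(uint32_t v) { return __builtin_bswap32(v); }
static uint32_t cpu_to_be32(uint32_t v) { return __builtin_bswap32(v); }

/* Hypothetical stand-in for crypto_inc(): increment a big-endian
 * counter of 'size' bytes, one 32-bit word at a time, returning as
 * soon as no carry propagates -- the common case the patch marks
 * with likely(). memcpy() sidesteps the alignment concern that the
 * IS_ALIGNED() test handles in the kernel version. */
static void counter_inc(uint8_t *a, unsigned int size)
{
	uint8_t *b = a + size;
	uint32_t c;

	for (; size >= 4; size -= 4) {
		b -= 4;
		memcpy(&c, b, 4);
		c = be32_to_cpu(c) + 1;
		uint32_t be = cpu_to_be32(c);
		memcpy(b, &be, 4);
		if (likely(c))	/* no carry: the hot early return */
			return;
	}

	/* Byte-wise tail for sizes that are not a multiple of 4. */
	for (; size; size--)
		if (++a[size - 1])
			break;
}
```

The likely(c) hint tells the compiler the no-carry early return is the hot path, matching the forward b.ls/cbz branches in the disassembly above; separately, IS_ALIGNED(x, a) in the patch is equivalent to the open-coded !(x & (a - 1)) for power-of-two alignments.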