From patchwork Thu Jun 27 10:26:17 2019
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 167899
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: linux-crypto@vger.kernel.org
Cc: herbert@gondor.apana.org.au, ebiggers@kernel.org,
	Ard Biesheuvel <ard.biesheuvel@linaro.org>
Subject: [PATCH v3 02/32] crypto: aes - rename local routines to prevent future clashes
Date: Thu, 27 Jun 2019 12:26:17 +0200
Message-Id: <20190627102647.2992-3-ard.biesheuvel@linaro.org>
In-Reply-To: <20190627102647.2992-1-ard.biesheuvel@linaro.org>
References: <20190627102647.2992-1-ard.biesheuvel@linaro.org>

Rename some local AES encrypt/decrypt routines so they don't clash with
the names we are about to introduce for the routines exposed by the
generic AES library.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm/crypto/aes-cipher-glue.c   | 8 ++++----
 arch/arm64/crypto/aes-cipher-glue.c | 8 ++++----
 arch/x86/crypto/aesni-intel_glue.c  | 8 ++++----
 crypto/aes_generic.c                | 8 ++++----
 drivers/crypto/padlock-aes.c        | 8 ++++----
 5 files changed, 20 insertions(+), 20 deletions(-)

-- 
2.20.1

diff --git a/arch/arm/crypto/aes-cipher-glue.c b/arch/arm/crypto/aes-cipher-glue.c
index c222f6e072ad..f6c07867b8ff 100644
--- a/arch/arm/crypto/aes-cipher-glue.c
+++ b/arch/arm/crypto/aes-cipher-glue.c
@@ -19,7 +19,7 @@ EXPORT_SYMBOL(__aes_arm_encrypt);
 asmlinkage void __aes_arm_decrypt(u32 *rk, int rounds, const u8 *in, u8 *out);
 EXPORT_SYMBOL(__aes_arm_decrypt);
 
-static void aes_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+static void aes_arm_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
 {
 	struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm);
 	int rounds = 6 + ctx->key_length / 4;
@@ -27,7 +27,7 @@ static void aes_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
 	__aes_arm_encrypt(ctx->key_enc, rounds, in, out);
 }
 
-static void aes_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+static void aes_arm_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
 {
 	struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm);
 	int rounds = 6 + ctx->key_length / 4;
@@ -47,8 +47,8 @@ static struct crypto_alg aes_alg = {
 	.cra_cipher.cia_min_keysize = AES_MIN_KEY_SIZE,
 	.cra_cipher.cia_max_keysize = AES_MAX_KEY_SIZE,
 	.cra_cipher.cia_setkey = crypto_aes_set_key,
-	.cra_cipher.cia_encrypt = aes_encrypt,
-	.cra_cipher.cia_decrypt = aes_decrypt,
+	.cra_cipher.cia_encrypt = aes_arm_encrypt,
+	.cra_cipher.cia_decrypt = aes_arm_decrypt,
 
 #ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
 	.cra_alignmask = 3,
diff --git a/arch/arm64/crypto/aes-cipher-glue.c b/arch/arm64/crypto/aes-cipher-glue.c
index 7288e7cbebff..0e90b06ebcec 100644
--- a/arch/arm64/crypto/aes-cipher-glue.c
+++ b/arch/arm64/crypto/aes-cipher-glue.c
@@ -18,7 +18,7 @@ EXPORT_SYMBOL(__aes_arm64_encrypt);
 asmlinkage void __aes_arm64_decrypt(u32 *rk, u8 *out, const u8 *in, int rounds);
 EXPORT_SYMBOL(__aes_arm64_decrypt);
 
-static void aes_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+static void aes_arm64_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
 {
 	struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm);
 	int rounds = 6 + ctx->key_length / 4;
@@ -26,7 +26,7 @@ static void aes_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
 	__aes_arm64_encrypt(ctx->key_enc, out, in, rounds);
 }
 
-static void aes_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+static void aes_arm64_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
 {
 	struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm);
 	int rounds = 6 + ctx->key_length / 4;
@@ -46,8 +46,8 @@ static struct crypto_alg aes_alg = {
 	.cra_cipher.cia_min_keysize = AES_MIN_KEY_SIZE,
 	.cra_cipher.cia_max_keysize = AES_MAX_KEY_SIZE,
 	.cra_cipher.cia_setkey = crypto_aes_set_key,
-	.cra_cipher.cia_encrypt = aes_encrypt,
-	.cra_cipher.cia_decrypt = aes_decrypt
+	.cra_cipher.cia_encrypt = aes_arm64_encrypt,
+	.cra_cipher.cia_decrypt = aes_arm64_decrypt
 };
 
 static int __init aes_init(void)
diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
index c95bd397dc07..836d50bd096f 100644
--- a/arch/x86/crypto/aesni-intel_glue.c
+++ b/arch/x86/crypto/aesni-intel_glue.c
@@ -349,7 +349,7 @@ static int aes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
 	return aes_set_key_common(tfm, crypto_tfm_ctx(tfm), in_key, key_len);
 }
 
-static void aes_encrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
+static void aesni_encrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
 {
 	struct crypto_aes_ctx *ctx = aes_ctx(crypto_tfm_ctx(tfm));
 
@@ -362,7 +362,7 @@ static void aes_encrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
 	}
 }
 
-static void aes_decrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
+static void aesni_decrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
 {
 	struct crypto_aes_ctx *ctx = aes_ctx(crypto_tfm_ctx(tfm));
 
@@ -923,8 +923,8 @@ static struct crypto_alg aesni_cipher_alg = {
 			.cia_min_keysize = AES_MIN_KEY_SIZE,
 			.cia_max_keysize = AES_MAX_KEY_SIZE,
 			.cia_setkey = aes_set_key,
-			.cia_encrypt = aes_encrypt,
-			.cia_decrypt = aes_decrypt
+			.cia_encrypt = aesni_encrypt,
+			.cia_decrypt = aesni_decrypt
 		}
 	}
 };
diff --git a/crypto/aes_generic.c b/crypto/aes_generic.c
index f217568917e4..3aa4a715c216 100644
--- a/crypto/aes_generic.c
+++ b/crypto/aes_generic.c
@@ -1332,7 +1332,7 @@ EXPORT_SYMBOL_GPL(crypto_aes_set_key);
 	f_rl(bo, bi, 3, k);	\
 } while (0)
 
-static void aes_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+static void crypto_aes_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
 {
 	const struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm);
 	u32 b0[4], b1[4];
@@ -1402,7 +1402,7 @@ static void aes_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
 	i_rl(bo, bi, 3, k);	\
 } while (0)
 
-static void aes_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+static void crypto_aes_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
 {
 	const struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm);
 	u32 b0[4], b1[4];
@@ -1454,8 +1454,8 @@ static struct crypto_alg aes_alg = {
 			.cia_min_keysize = AES_MIN_KEY_SIZE,
 			.cia_max_keysize = AES_MAX_KEY_SIZE,
 			.cia_setkey = crypto_aes_set_key,
-			.cia_encrypt = aes_encrypt,
-			.cia_decrypt = aes_decrypt
+			.cia_encrypt = crypto_aes_encrypt,
+			.cia_decrypt = crypto_aes_decrypt
 		}
 	}
 };
diff --git a/drivers/crypto/padlock-aes.c b/drivers/crypto/padlock-aes.c
index 09d823d36d3a..854539512c35 100644
--- a/drivers/crypto/padlock-aes.c
+++ b/drivers/crypto/padlock-aes.c
@@ -299,7 +299,7 @@ static inline u8 *padlock_xcrypt_cbc(const u8 *input, u8 *output, void *key,
 	return iv;
 }
 
-static void aes_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+static void padlock_aes_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
 {
 	struct aes_ctx *ctx = aes_ctx(tfm);
 
@@ -308,7 +308,7 @@ static void aes_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
 	padlock_store_cword(&ctx->cword.encrypt);
 }
 
-static void aes_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+static void padlock_aes_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
 {
 	struct aes_ctx *ctx = aes_ctx(tfm);
 
@@ -331,8 +331,8 @@ static struct crypto_alg aes_alg = {
 			.cia_min_keysize = AES_MIN_KEY_SIZE,
 			.cia_max_keysize = AES_MAX_KEY_SIZE,
 			.cia_setkey = aes_set_key,
-			.cia_encrypt = aes_encrypt,
-			.cia_decrypt = aes_decrypt,
+			.cia_encrypt = padlock_aes_encrypt,
+			.cia_decrypt = padlock_aes_decrypt,
 		}
 	}
 };
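
For context beyond this patch: the clash being avoided comes from the
generic AES library that later patches in this series introduce. The
sketch below shows that library interface as I understand it from the
series (declarations in include/crypto/aes.h backing lib/crypto/aes.c);
treat it as an illustration of why the plain names become unavailable,
not as a quote of any specific patch.

    /*
     * Hedged sketch of the AES library routines this series adds. Once
     * extern declarations like these are visible to every file that
     * includes <crypto/aes.h>, a driver can no longer define its own
     * static aes_encrypt()/aes_decrypt() with a different signature.
     */
    int aes_expandkey(struct crypto_aes_ctx *ctx, const u8 *in_key,
                      unsigned int key_len);
    void aes_encrypt(const struct crypto_aes_ctx *ctx, u8 *out, const u8 *in);
    void aes_decrypt(const struct crypto_aes_ctx *ctx, u8 *out, const u8 *in);

Hence the per-driver prefixes used above: aes_arm_*, aes_arm64_*,
aesni_*, crypto_aes_* and padlock_aes_*.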