From patchwork Tue Feb 25 12:57:50 2014
X-Patchwork-Submitter: Ard Biesheuvel <ard.biesheuvel@linaro.org>
X-Patchwork-Id: 25293
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: linux-crypto@vger.kernel.org, herbert@gondor.apana.org.au
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Subject: [RFC PATCH 1/3] crypto: remove direct blkcipher_walk dependency on transform
Date: Tue, 25 Feb 2014 13:57:50 +0100
Message-Id: <1393333072-15310-2-git-send-email-ard.biesheuvel@linaro.org>
In-Reply-To: <1393333072-15310-1-git-send-email-ard.biesheuvel@linaro.org>
References: <1393333072-15310-1-git-send-email-ard.biesheuvel@linaro.org>
X-Mailing-List: linux-crypto@vger.kernel.org

In order to allow uses of the blkcipher walk API other than the blkcipher
algorithms themselves, this patch copies some of the transform's data members
into the walk struct, so that the transform is only accessed at walk init
time.
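To illustrate the intent, here is a minimal sketch of the kind of caller this
change enables. It is purely illustrative and not part of this patch: the name
example_aead_walk_virt_block and the choice of an AEAD transform are
assumptions, and such a helper would have to live in crypto/blkcipher.c, since
blkcipher_walk_first() and the BLKCIPHER_WALK_* flags are private to that
file.

/*
 * Illustrative sketch only -- not part of this patch.  With the block size,
 * IV size and alignmask now carried in struct blkcipher_walk itself, a
 * helper along these lines could seed a walk from a transform other than a
 * struct crypto_blkcipher (here an AEAD).  It relies on blkcipher_walk_first()
 * and BLKCIPHER_WALK_PHYS, which are private to crypto/blkcipher.c.
 */
#include <linux/crypto.h>
#include <crypto/algapi.h>

static int example_aead_walk_virt_block(struct blkcipher_desc *desc,
                                        struct blkcipher_walk *walk,
                                        struct crypto_aead *tfm,
                                        unsigned int blocksize)
{
        walk->flags &= ~BLKCIPHER_WALK_PHYS;

        /* Populate the new per-walk fields from the AEAD's parameters. */
        walk->walk_blocksize = blocksize;
        walk->cipher_blocksize = crypto_aead_blocksize(tfm);
        walk->ivsize = crypto_aead_ivsize(tfm);
        walk->alignmask = crypto_aead_alignmask(tfm);

        return blkcipher_walk_first(desc, walk);
}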
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 crypto/blkcipher.c      | 67 ++++++++++++++++++++++++-------------------
 include/crypto/algapi.h |  5 +++-
 2 files changed, 37 insertions(+), 35 deletions(-)

diff --git a/crypto/blkcipher.c b/crypto/blkcipher.c
index a79e7e9ab86e..46fdab5e9cc7 100644
--- a/crypto/blkcipher.c
+++ b/crypto/blkcipher.c
@@ -70,14 +70,12 @@ static inline u8 *blkcipher_get_spot(u8 *start, unsigned int len)
         return max(start, end_page);
 }
 
-static inline unsigned int blkcipher_done_slow(struct crypto_blkcipher *tfm,
-                                               struct blkcipher_walk *walk,
+static inline unsigned int blkcipher_done_slow(struct blkcipher_walk *walk,
                                                unsigned int bsize)
 {
         u8 *addr;
-        unsigned int alignmask = crypto_blkcipher_alignmask(tfm);
 
-        addr = (u8 *)ALIGN((unsigned long)walk->buffer, alignmask + 1);
+        addr = (u8 *)ALIGN((unsigned long)walk->buffer, walk->alignmask + 1);
         addr = blkcipher_get_spot(addr, bsize);
         scatterwalk_copychunks(addr, &walk->out, bsize, 1);
         return bsize;
@@ -105,7 +103,6 @@ static inline unsigned int blkcipher_done_fast(struct blkcipher_walk *walk,
 int blkcipher_walk_done(struct blkcipher_desc *desc,
                         struct blkcipher_walk *walk, int err)
 {
-        struct crypto_blkcipher *tfm = desc->tfm;
         unsigned int nbytes = 0;
 
         if (likely(err >= 0)) {
@@ -117,7 +114,7 @@ int blkcipher_walk_done(struct blkcipher_desc *desc,
                         err = -EINVAL;
                         goto err;
                 } else
-                        n = blkcipher_done_slow(tfm, walk, n);
+                        n = blkcipher_done_slow(walk, n);
 
                 nbytes = walk->total - n;
                 err = 0;
@@ -136,7 +133,7 @@ err:
         }
 
         if (walk->iv != desc->info)
-                memcpy(desc->info, walk->iv, crypto_blkcipher_ivsize(tfm));
+                memcpy(desc->info, walk->iv, walk->ivsize);
         if (walk->buffer != walk->page)
                 kfree(walk->buffer);
         if (walk->page)
@@ -226,22 +223,20 @@ static inline int blkcipher_next_fast(struct blkcipher_desc *desc,
 static int blkcipher_walk_next(struct blkcipher_desc *desc,
                                struct blkcipher_walk *walk)
 {
-        struct crypto_blkcipher *tfm = desc->tfm;
-        unsigned int alignmask = crypto_blkcipher_alignmask(tfm);
         unsigned int bsize;
         unsigned int n;
         int err;
 
         n = walk->total;
-        if (unlikely(n < crypto_blkcipher_blocksize(tfm))) {
+        if (unlikely(n < walk->cipher_blocksize)) {
                 desc->flags |= CRYPTO_TFM_RES_BAD_BLOCK_LEN;
                 return blkcipher_walk_done(desc, walk, -EINVAL);
         }
 
         walk->flags &= ~(BLKCIPHER_WALK_SLOW | BLKCIPHER_WALK_COPY |
                          BLKCIPHER_WALK_DIFF);
-        if (!scatterwalk_aligned(&walk->in, alignmask) ||
-            !scatterwalk_aligned(&walk->out, alignmask)) {
+        if (!scatterwalk_aligned(&walk->in, walk->alignmask) ||
+            !scatterwalk_aligned(&walk->out, walk->alignmask)) {
                 walk->flags |= BLKCIPHER_WALK_COPY;
                 if (!walk->page) {
                         walk->page = (void *)__get_free_page(GFP_ATOMIC);
@@ -250,12 +245,12 @@ static int blkcipher_walk_next(struct blkcipher_desc *desc,
                 }
         }
 
-        bsize = min(walk->blocksize, n);
+        bsize = min(walk->walk_blocksize, n);
         n = scatterwalk_clamp(&walk->in, n);
         n = scatterwalk_clamp(&walk->out, n);
 
         if (unlikely(n < bsize)) {
-                err = blkcipher_next_slow(desc, walk, bsize, alignmask);
+                err = blkcipher_next_slow(desc, walk, bsize, walk->alignmask);
                 goto set_phys_lowmem;
         }
 
@@ -277,28 +272,26 @@ set_phys_lowmem:
         return err;
 }
 
-static inline int blkcipher_copy_iv(struct blkcipher_walk *walk,
-                                    struct crypto_blkcipher *tfm,
-                                    unsigned int alignmask)
+static inline int blkcipher_copy_iv(struct blkcipher_walk *walk)
 {
-        unsigned bs = walk->blocksize;
-        unsigned int ivsize = crypto_blkcipher_ivsize(tfm);
-        unsigned aligned_bs = ALIGN(bs, alignmask + 1);
-        unsigned int size = aligned_bs * 2 + ivsize + max(aligned_bs, ivsize) -
-                            (alignmask + 1);
+        unsigned bs = walk->walk_blocksize;
+        unsigned aligned_bs = ALIGN(bs, walk->alignmask + 1);
+        unsigned int size = aligned_bs * 2 +
+                            walk->ivsize + max(aligned_bs, walk->ivsize) -
+                            (walk->alignmask + 1);
         u8 *iv;
 
-        size += alignmask & ~(crypto_tfm_ctx_alignment() - 1);
+        size += walk->alignmask & ~(crypto_tfm_ctx_alignment() - 1);
         walk->buffer = kmalloc(size, GFP_ATOMIC);
         if (!walk->buffer)
                 return -ENOMEM;
 
-        iv = (u8 *)ALIGN((unsigned long)walk->buffer, alignmask + 1);
+        iv = (u8 *)ALIGN((unsigned long)walk->buffer, walk->alignmask + 1);
         iv = blkcipher_get_spot(iv, bs) + aligned_bs;
         iv = blkcipher_get_spot(iv, bs) + aligned_bs;
-        iv = blkcipher_get_spot(iv, ivsize);
+        iv = blkcipher_get_spot(iv, walk->ivsize);
 
-        walk->iv = memcpy(iv, walk->iv, ivsize);
+        walk->iv = memcpy(iv, walk->iv, walk->ivsize);
         return 0;
 }
 
@@ -306,7 +299,10 @@ int blkcipher_walk_virt(struct blkcipher_desc *desc,
                         struct blkcipher_walk *walk)
 {
         walk->flags &= ~BLKCIPHER_WALK_PHYS;
-        walk->blocksize = crypto_blkcipher_blocksize(desc->tfm);
+        walk->walk_blocksize = crypto_blkcipher_blocksize(desc->tfm);
+        walk->cipher_blocksize = walk->walk_blocksize;
+        walk->ivsize = crypto_blkcipher_ivsize(desc->tfm);
+        walk->alignmask = crypto_blkcipher_alignmask(desc->tfm);
         return blkcipher_walk_first(desc, walk);
 }
 EXPORT_SYMBOL_GPL(blkcipher_walk_virt);
@@ -315,7 +311,10 @@ int blkcipher_walk_phys(struct blkcipher_desc *desc,
                         struct blkcipher_walk *walk)
 {
         walk->flags |= BLKCIPHER_WALK_PHYS;
-        walk->blocksize = crypto_blkcipher_blocksize(desc->tfm);
+        walk->walk_blocksize = crypto_blkcipher_blocksize(desc->tfm);
+        walk->cipher_blocksize = walk->walk_blocksize;
+        walk->ivsize = crypto_blkcipher_ivsize(desc->tfm);
+        walk->alignmask = crypto_blkcipher_alignmask(desc->tfm);
         return blkcipher_walk_first(desc, walk);
 }
 EXPORT_SYMBOL_GPL(blkcipher_walk_phys);
@@ -323,9 +322,6 @@ EXPORT_SYMBOL_GPL(blkcipher_walk_phys);
 static int blkcipher_walk_first(struct blkcipher_desc *desc,
                                 struct blkcipher_walk *walk)
 {
-        struct crypto_blkcipher *tfm = desc->tfm;
-        unsigned int alignmask = crypto_blkcipher_alignmask(tfm);
-
         if (WARN_ON_ONCE(in_irq()))
                 return -EDEADLK;
 
@@ -335,8 +331,8 @@ static int blkcipher_walk_first(struct blkcipher_desc *desc,
 
         walk->buffer = NULL;
         walk->iv = desc->info;
-        if (unlikely(((unsigned long)walk->iv & alignmask))) {
-                int err = blkcipher_copy_iv(walk, tfm, alignmask);
+        if (unlikely(((unsigned long)walk->iv & walk->alignmask))) {
+                int err = blkcipher_copy_iv(walk);
                 if (err)
                         return err;
         }
@@ -353,7 +349,10 @@ int blkcipher_walk_virt_block(struct blkcipher_desc *desc,
                               unsigned int blocksize)
 {
         walk->flags &= ~BLKCIPHER_WALK_PHYS;
-        walk->blocksize = blocksize;
+        walk->walk_blocksize = blocksize;
+        walk->cipher_blocksize = crypto_blkcipher_blocksize(desc->tfm);
+        walk->ivsize = crypto_blkcipher_ivsize(desc->tfm);
+        walk->alignmask = crypto_blkcipher_alignmask(desc->tfm);
         return blkcipher_walk_first(desc, walk);
 }
 EXPORT_SYMBOL_GPL(blkcipher_walk_virt_block);
diff --git a/include/crypto/algapi.h b/include/crypto/algapi.h
index e73c19e90e38..d9d14a0f0653 100644
--- a/include/crypto/algapi.h
+++ b/include/crypto/algapi.h
@@ -100,9 +100,12 @@ struct blkcipher_walk {
         void *page;
         u8 *buffer;
         u8 *iv;
+        unsigned int ivsize;
 
         int flags;
-        unsigned int blocksize;
+        unsigned int walk_blocksize;
+        unsigned int cipher_blocksize;
+        unsigned int alignmask;
 };
 
 struct ablkcipher_walk {
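For reference, the way a walk is normally driven is not changed by the patch
above. The following is only a rough sketch of the usual loop: the function
name example_encrypt is made up, and the actual per-block processing is
elided.

/*
 * Rough sketch of a typical blkcipher_walk consumer.  The third argument to
 * blkcipher_walk_done() is the number of bytes left unprocessed in this
 * chunk, i.e. the partial tail smaller than one block.
 */
#include <linux/scatterlist.h>
#include <linux/crypto.h>
#include <crypto/algapi.h>

static int example_encrypt(struct blkcipher_desc *desc,
                           struct scatterlist *dst, struct scatterlist *src,
                           unsigned int nbytes)
{
        const unsigned int bsize = crypto_blkcipher_blocksize(desc->tfm);
        struct blkcipher_walk walk;
        int err;

        blkcipher_walk_init(&walk, dst, src, nbytes);
        err = blkcipher_walk_virt(desc, &walk);

        while ((nbytes = walk.nbytes) != 0) {
                /*
                 * Process nbytes - (nbytes % bsize) bytes from
                 * walk.src.virt.addr into walk.dst.virt.addr here.
                 */
                err = blkcipher_walk_done(desc, &walk, nbytes % bsize);
        }

        return err;
}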